2025-09-16 00:00:08.459972 | Job console starting
2025-09-16 00:00:08.478702 | Updating git repos
2025-09-16 00:00:08.815260 | Cloning repos into workspace
2025-09-16 00:00:09.013677 | Restoring repo states
2025-09-16 00:00:09.042372 | Merging changes
2025-09-16 00:00:09.042394 | Checking out repos
2025-09-16 00:00:09.469939 | Preparing playbooks
2025-09-16 00:00:10.222999 | Running Ansible setup
2025-09-16 00:00:15.620272 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-16 00:00:16.622187 |
2025-09-16 00:00:16.622307 | PLAY [Base pre]
2025-09-16 00:00:16.667099 |
2025-09-16 00:00:16.667235 | TASK [Setup log path fact]
2025-09-16 00:00:16.695689 | orchestrator | ok
2025-09-16 00:00:16.786015 |
2025-09-16 00:00:16.786154 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-16 00:00:16.854120 | orchestrator | ok
2025-09-16 00:00:16.877196 |
2025-09-16 00:00:16.877296 | TASK [emit-job-header : Print job information]
2025-09-16 00:00:16.942117 | # Job Information
2025-09-16 00:00:16.942259 | Ansible Version: 2.16.14
2025-09-16 00:00:16.942288 | Job: testbed-deploy-in-a-nutshell-with-tempest-ubuntu-24.04
2025-09-16 00:00:16.942316 | Pipeline: periodic-midnight
2025-09-16 00:00:16.942363 | Executor: 521e9411259a
2025-09-16 00:00:16.942381 | Triggered by: https://github.com/osism/testbed
2025-09-16 00:00:16.942399 | Event ID: 23c7092edcbb46448d9899729bd794ad
2025-09-16 00:00:16.950550 |
2025-09-16 00:00:16.950648 | LOOP [emit-job-header : Print node information]
2025-09-16 00:00:17.161248 | orchestrator | ok:
2025-09-16 00:00:17.161432 | orchestrator | # Node Information
2025-09-16 00:00:17.161535 | orchestrator | Inventory Hostname: orchestrator
2025-09-16 00:00:17.161615 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-16 00:00:17.161642 | orchestrator | Username: zuul-testbed03
2025-09-16 00:00:17.161664 | orchestrator | Distro: Debian 12.12
2025-09-16 00:00:17.161688 | orchestrator | Provider: static-testbed
2025-09-16 00:00:17.161709 | orchestrator | Region:
2025-09-16 00:00:17.161730 | orchestrator | Label: testbed-orchestrator
2025-09-16 00:00:17.161751 | orchestrator | Product Name: OpenStack Nova
2025-09-16 00:00:17.161771 | orchestrator | Interface IP: 81.163.193.140
2025-09-16 00:00:17.191878 |
2025-09-16 00:00:17.191980 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-16 00:00:18.059992 | orchestrator -> localhost | changed
2025-09-16 00:00:18.067427 |
2025-09-16 00:00:18.067524 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-16 00:00:21.026840 | orchestrator -> localhost | changed
2025-09-16 00:00:21.049796 |
2025-09-16 00:00:21.049949 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-16 00:00:21.916280 | orchestrator -> localhost | ok
2025-09-16 00:00:21.921922 |
2025-09-16 00:00:21.922012 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-16 00:00:21.949078 | orchestrator | ok
2025-09-16 00:00:21.979910 | orchestrator | included: /var/lib/zuul/builds/a8a25034b5de43c9aad8dd2bdd5f1f51/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-16 00:00:21.989197 |
2025-09-16 00:00:21.989282 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-16 00:00:26.645913 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-09-16 00:00:26.646115 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/a8a25034b5de43c9aad8dd2bdd5f1f51/work/a8a25034b5de43c9aad8dd2bdd5f1f51_id_rsa
2025-09-16 00:00:26.646150 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/a8a25034b5de43c9aad8dd2bdd5f1f51/work/a8a25034b5de43c9aad8dd2bdd5f1f51_id_rsa.pub
2025-09-16 00:00:26.646173 | orchestrator -> localhost | The key fingerprint is:
2025-09-16 00:00:26.646194 | orchestrator -> localhost | SHA256:E2rxSDM08EZ1QyjAHDC9KEl2GfyxmE1GlVB6AuJzerc zuul-build-sshkey
2025-09-16 00:00:26.646213 | orchestrator -> localhost | The key's randomart image is:
2025-09-16 00:00:26.646241 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-16 00:00:26.646261 | orchestrator -> localhost | | .+BB*B+oo+ |
2025-09-16 00:00:26.646279 | orchestrator -> localhost | |.o.==Boo.. . |
2025-09-16 00:00:26.646297 | orchestrator -> localhost | |o+.oB+Xo. |
2025-09-16 00:00:26.646313 | orchestrator -> localhost | |o =o.*oB . |
2025-09-16 00:00:26.646356 | orchestrator -> localhost | | o . .+ S |
2025-09-16 00:00:26.646378 | orchestrator -> localhost | | . ... . |
2025-09-16 00:00:26.646396 | orchestrator -> localhost | | E |
2025-09-16 00:00:26.646413 | orchestrator -> localhost | | |
2025-09-16 00:00:26.646431 | orchestrator -> localhost | | |
2025-09-16 00:00:26.646449 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-16 00:00:26.646496 | orchestrator -> localhost | ok: Runtime: 0:00:03.487786
2025-09-16 00:00:26.652484 |
2025-09-16 00:00:26.652574 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-16 00:00:26.680151 | orchestrator | ok
2025-09-16 00:00:26.692420 | orchestrator | included: /var/lib/zuul/builds/a8a25034b5de43c9aad8dd2bdd5f1f51/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-16 00:00:26.706580 |
2025-09-16 00:00:26.706670 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-16 00:00:26.756680 | orchestrator | skipping: Conditional result was False
2025-09-16 00:00:26.763114 |
2025-09-16 00:00:26.763200 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-16 00:00:27.536617 | orchestrator | changed
2025-09-16 00:00:27.541605 |
2025-09-16 00:00:27.541687 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-16 00:00:27.829698 | orchestrator | ok
2025-09-16 00:00:27.834697 |
2025-09-16 00:00:27.834773 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-16 00:00:28.274223 | orchestrator | ok
2025-09-16 00:00:28.287556 |
2025-09-16 00:00:28.287650 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-16 00:00:28.794963 | orchestrator | ok
2025-09-16 00:00:28.800966 |
2025-09-16 00:00:28.801045 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-16 00:00:28.861768 | orchestrator | skipping: Conditional result was False
2025-09-16 00:00:28.868041 |
2025-09-16 00:00:28.868125 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-16 00:00:29.968690 | orchestrator -> localhost | changed
2025-09-16 00:00:29.980856 |
2025-09-16 00:00:29.980941 | TASK [add-build-sshkey : Add back temp key]
2025-09-16 00:00:30.730600 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/a8a25034b5de43c9aad8dd2bdd5f1f51/work/a8a25034b5de43c9aad8dd2bdd5f1f51_id_rsa (zuul-build-sshkey)
2025-09-16 00:00:30.730783 | orchestrator -> localhost | ok: Runtime: 0:00:00.029282
2025-09-16 00:00:30.736768 |
2025-09-16 00:00:30.736850 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-09-16 00:00:31.344160 | orchestrator | ok
2025-09-16 00:00:31.348930 |
2025-09-16 00:00:31.349004 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-09-16 00:00:31.387213 | orchestrator | skipping: Conditional result was False
2025-09-16 00:00:31.448909 |
2025-09-16 00:00:31.449007 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-09-16 00:00:32.018242 | orchestrator | ok
2025-09-16 00:00:32.032288 |
2025-09-16 00:00:32.032407 | TASK [validate-host : Define zuul_info_dir fact]
2025-09-16 00:00:32.084105 | orchestrator | ok
2025-09-16 00:00:32.090025 |
2025-09-16 00:00:32.090104 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-09-16 00:00:32.705368 | orchestrator -> localhost | ok
2025-09-16 00:00:32.711383 |
2025-09-16 00:00:32.711461 | TASK [validate-host : Collect information about the host]
2025-09-16 00:00:34.283040 | orchestrator | ok
2025-09-16 00:00:34.304712 |
2025-09-16 00:00:34.312013 | TASK [validate-host : Sanitize hostname]
2025-09-16 00:00:34.382283 | orchestrator | ok
2025-09-16 00:00:34.387075 |
2025-09-16 00:00:34.387163 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-09-16 00:00:35.157663 | orchestrator -> localhost | changed
2025-09-16 00:00:35.163070 |
2025-09-16 00:00:35.163153 | TASK [validate-host : Collect information about zuul worker]
2025-09-16 00:00:35.607471 | orchestrator | ok
2025-09-16 00:00:35.612151 |
2025-09-16 00:00:35.612238 | TASK [validate-host : Write out all zuul information for each host]
2025-09-16 00:00:36.178896 | orchestrator -> localhost | changed
2025-09-16 00:00:36.187491 |
2025-09-16 00:00:36.187584 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-09-16 00:00:36.484486 | orchestrator | ok
2025-09-16 00:00:36.513378 |
2025-09-16 00:00:36.513486 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-09-16 00:01:13.996661 | orchestrator | changed:
2025-09-16 00:01:13.996850 | orchestrator | .d..t...... src/
2025-09-16 00:01:13.996885 | orchestrator | .d..t...... src/github.com/
2025-09-16 00:01:13.996918 | orchestrator | .d..t...... src/github.com/osism/
2025-09-16 00:01:13.996941 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-09-16 00:01:13.996963 | orchestrator | RedHat.yml
2025-09-16 00:01:14.024412 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-09-16 00:01:14.024430 | orchestrator | RedHat.yml
2025-09-16 00:01:14.024485 | orchestrator | = 2.2.0"...
2025-09-16 00:01:25.450626 | orchestrator | 00:01:25.450 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-09-16 00:01:25.479600 | orchestrator | 00:01:25.479 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-09-16 00:01:25.982417 | orchestrator | 00:01:25.982 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-09-16 00:01:26.404780 | orchestrator | 00:01:26.404 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-09-16 00:01:26.480381 | orchestrator | 00:01:26.480 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-09-16 00:01:27.341679 | orchestrator | 00:01:27.341 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-09-16 00:01:27.416830 | orchestrator | 00:01:27.416 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-09-16 00:01:28.333511 | orchestrator | 00:01:28.333 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-09-16 00:01:28.333593 | orchestrator | 00:01:28.333 STDOUT terraform: Providers are signed by their developers.
2025-09-16 00:01:28.333651 | orchestrator | 00:01:28.333 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-09-16 00:01:28.333693 | orchestrator | 00:01:28.333 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-09-16 00:01:28.333835 | orchestrator | 00:01:28.333 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-09-16 00:01:28.333912 | orchestrator | 00:01:28.333 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-09-16 00:01:28.333966 | orchestrator | 00:01:28.333 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-09-16 00:01:28.333978 | orchestrator | 00:01:28.333 STDOUT terraform: you run "tofu init" in the future.
2025-09-16 00:01:28.334581 | orchestrator | 00:01:28.334 STDOUT terraform: OpenTofu has been successfully initialized!
2025-09-16 00:01:28.334747 | orchestrator | 00:01:28.334 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-09-16 00:01:28.334796 | orchestrator | 00:01:28.334 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-09-16 00:01:28.334807 | orchestrator | 00:01:28.334 STDOUT terraform: should now work.
2025-09-16 00:01:28.334877 | orchestrator | 00:01:28.334 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-09-16 00:01:28.334927 | orchestrator | 00:01:28.334 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-09-16 00:01:28.334999 | orchestrator | 00:01:28.334 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-09-16 00:01:28.427420 | orchestrator | 00:01:28.427 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-09-16 00:01:28.427540 | orchestrator | 00:01:28.427 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-09-16 00:01:28.619437 | orchestrator | 00:01:28.619 STDOUT terraform: Created and switched to workspace "ci"!
2025-09-16 00:01:28.619518 | orchestrator | 00:01:28.619 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-09-16 00:01:28.619530 | orchestrator | 00:01:28.619 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-09-16 00:01:28.619535 | orchestrator | 00:01:28.619 STDOUT terraform: for this configuration.
2025-09-16 00:01:28.767925 | orchestrator | 00:01:28.767 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
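A minimal sketch of the replacement invocation suggested by the two deprecation warnings above; only the binary path and the "ci" workspace name come from this log, the surrounding call site is hypothetical:

    # Point Terragrunt at the OpenTofu binary via the non-deprecated variable
    export TG_TF_PATH=/home/zuul-testbed03/terraform   # replaces TERRAGRUNT_TFPATH
    # Run workspace operations through `terragrunt run --` instead of the bare command
    terragrunt run -- workspace new ci                  # replaces `terragrunt workspace new ci`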
2025-09-16 00:01:28.767991 | orchestrator | 00:01:28.767 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead. 2025-09-16 00:01:28.851442 | orchestrator | 00:01:28.851 STDOUT terraform: ci.auto.tfvars 2025-09-16 00:01:29.733323 | orchestrator | 00:01:29.732 STDOUT terraform: default_custom.tf 2025-09-16 00:01:30.334999 | orchestrator | 00:01:30.331 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead. 2025-09-16 00:01:31.228438 | orchestrator | 00:01:31.227 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-09-16 00:01:31.776487 | orchestrator | 00:01:31.776 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-09-16 00:01:32.335126 | orchestrator | 00:01:32.334 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-09-16 00:01:32.335221 | orchestrator | 00:01:32.335 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-09-16 00:01:32.335293 | orchestrator | 00:01:32.335 STDOUT terraform:  + create 2025-09-16 00:01:32.335317 | orchestrator | 00:01:32.335 STDOUT terraform:  <= read (data resources) 2025-09-16 00:01:32.335353 | orchestrator | 00:01:32.335 STDOUT terraform: OpenTofu will perform the following actions: 2025-09-16 00:01:32.335673 | orchestrator | 00:01:32.335 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-09-16 00:01:32.335754 | orchestrator | 00:01:32.335 STDOUT terraform:  # (config refers to values not yet known) 2025-09-16 00:01:32.335765 | orchestrator | 00:01:32.335 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-09-16 00:01:32.335774 | orchestrator | 00:01:32.335 STDOUT terraform:  + checksum = (known after apply) 2025-09-16 00:01:32.335806 | orchestrator | 00:01:32.335 STDOUT terraform:  + created_at = (known after apply) 2025-09-16 00:01:32.335837 | orchestrator | 00:01:32.335 STDOUT terraform:  + file = (known after apply) 2025-09-16 00:01:32.335871 | orchestrator | 00:01:32.335 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.335903 | orchestrator | 00:01:32.335 STDOUT terraform:  + metadata = (known after apply) 2025-09-16 00:01:32.335931 | orchestrator | 00:01:32.335 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-09-16 00:01:32.335958 | orchestrator | 00:01:32.335 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-09-16 00:01:32.335981 | orchestrator | 00:01:32.335 STDOUT terraform:  + most_recent = true 2025-09-16 00:01:32.336010 | orchestrator | 00:01:32.335 STDOUT terraform:  + name = (known after apply) 2025-09-16 00:01:32.336040 | orchestrator | 00:01:32.336 STDOUT terraform:  + protected = (known after apply) 2025-09-16 00:01:32.336075 | orchestrator | 00:01:32.336 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.336107 | orchestrator | 00:01:32.336 STDOUT terraform:  + schema = (known after apply) 2025-09-16 00:01:32.336137 | orchestrator | 00:01:32.336 STDOUT terraform:  + size_bytes = (known after apply) 2025-09-16 00:01:32.336167 | orchestrator | 00:01:32.336 STDOUT terraform:  + tags = (known after apply) 2025-09-16 00:01:32.336197 | orchestrator | 00:01:32.336 STDOUT terraform:  + updated_at = (known after apply) 2025-09-16 00:01:32.336206 | orchestrator | 
00:01:32.336 STDOUT terraform:  } 2025-09-16 00:01:32.336271 | orchestrator | 00:01:32.336 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-09-16 00:01:32.336297 | orchestrator | 00:01:32.336 STDOUT terraform:  # (config refers to values not yet known) 2025-09-16 00:01:32.336335 | orchestrator | 00:01:32.336 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-09-16 00:01:32.336364 | orchestrator | 00:01:32.336 STDOUT terraform:  + checksum = (known after apply) 2025-09-16 00:01:32.336395 | orchestrator | 00:01:32.336 STDOUT terraform:  + created_at = (known after apply) 2025-09-16 00:01:32.336426 | orchestrator | 00:01:32.336 STDOUT terraform:  + file = (known after apply) 2025-09-16 00:01:32.336455 | orchestrator | 00:01:32.336 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.336484 | orchestrator | 00:01:32.336 STDOUT terraform:  + metadata = (known after apply) 2025-09-16 00:01:32.336516 | orchestrator | 00:01:32.336 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-09-16 00:01:32.336546 | orchestrator | 00:01:32.336 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-09-16 00:01:32.336575 | orchestrator | 00:01:32.336 STDOUT terraform:  + most_recent = true 2025-09-16 00:01:32.336597 | orchestrator | 00:01:32.336 STDOUT terraform:  + name = (known after apply) 2025-09-16 00:01:32.336626 | orchestrator | 00:01:32.336 STDOUT terraform:  + protected = (known after apply) 2025-09-16 00:01:32.336658 | orchestrator | 00:01:32.336 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.336688 | orchestrator | 00:01:32.336 STDOUT terraform:  + schema = (known after apply) 2025-09-16 00:01:32.336729 | orchestrator | 00:01:32.336 STDOUT terraform:  + size_bytes = (known after apply) 2025-09-16 00:01:32.336758 | orchestrator | 00:01:32.336 STDOUT terraform:  + tags = (known after apply) 2025-09-16 00:01:32.336789 | orchestrator | 00:01:32.336 STDOUT terraform:  + updated_at = (known after apply) 2025-09-16 00:01:32.336797 | orchestrator | 00:01:32.336 STDOUT terraform:  } 2025-09-16 00:01:32.336830 | orchestrator | 00:01:32.336 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-09-16 00:01:32.336860 | orchestrator | 00:01:32.336 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-09-16 00:01:32.336901 | orchestrator | 00:01:32.336 STDOUT terraform:  + content = (known after apply) 2025-09-16 00:01:32.336940 | orchestrator | 00:01:32.336 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-16 00:01:32.336975 | orchestrator | 00:01:32.336 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-16 00:01:32.337011 | orchestrator | 00:01:32.336 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-16 00:01:32.337086 | orchestrator | 00:01:32.337 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-16 00:01:32.337129 | orchestrator | 00:01:32.337 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-16 00:01:32.337156 | orchestrator | 00:01:32.337 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-16 00:01:32.337181 | orchestrator | 00:01:32.337 STDOUT terraform:  + directory_permission = "0777" 2025-09-16 00:01:32.337209 | orchestrator | 00:01:32.337 STDOUT terraform:  + file_permission = "0644" 2025-09-16 00:01:32.337248 | orchestrator | 00:01:32.337 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-09-16 00:01:32.337286 | orchestrator | 00:01:32.337 STDOUT 
terraform:  + id = (known after apply) 2025-09-16 00:01:32.337295 | orchestrator | 00:01:32.337 STDOUT terraform:  } 2025-09-16 00:01:32.337327 | orchestrator | 00:01:32.337 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-09-16 00:01:32.337353 | orchestrator | 00:01:32.337 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-09-16 00:01:32.337392 | orchestrator | 00:01:32.337 STDOUT terraform:  + content = (known after apply) 2025-09-16 00:01:32.337442 | orchestrator | 00:01:32.337 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-16 00:01:32.337473 | orchestrator | 00:01:32.337 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-16 00:01:32.337510 | orchestrator | 00:01:32.337 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-16 00:01:32.337546 | orchestrator | 00:01:32.337 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-16 00:01:32.337584 | orchestrator | 00:01:32.337 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-16 00:01:32.337621 | orchestrator | 00:01:32.337 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-16 00:01:32.337646 | orchestrator | 00:01:32.337 STDOUT terraform:  + directory_permission = "0777" 2025-09-16 00:01:32.337670 | orchestrator | 00:01:32.337 STDOUT terraform:  + file_permission = "0644" 2025-09-16 00:01:32.337741 | orchestrator | 00:01:32.337 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-09-16 00:01:32.337751 | orchestrator | 00:01:32.337 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.337760 | orchestrator | 00:01:32.337 STDOUT terraform:  } 2025-09-16 00:01:32.337782 | orchestrator | 00:01:32.337 STDOUT terraform:  # local_file.inventory will be created 2025-09-16 00:01:32.337809 | orchestrator | 00:01:32.337 STDOUT terraform:  + resource "local_file" "inventory" { 2025-09-16 00:01:32.337843 | orchestrator | 00:01:32.337 STDOUT terraform:  + content = (known after apply) 2025-09-16 00:01:32.337879 | orchestrator | 00:01:32.337 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-16 00:01:32.337915 | orchestrator | 00:01:32.337 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-16 00:01:32.337952 | orchestrator | 00:01:32.337 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-16 00:01:32.337990 | orchestrator | 00:01:32.337 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-16 00:01:32.338060 | orchestrator | 00:01:32.337 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-16 00:01:32.338094 | orchestrator | 00:01:32.338 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-16 00:01:32.338102 | orchestrator | 00:01:32.338 STDOUT terraform:  + directory_permission = "0777" 2025-09-16 00:01:32.338131 | orchestrator | 00:01:32.338 STDOUT terraform:  + file_permission = "0644" 2025-09-16 00:01:32.338162 | orchestrator | 00:01:32.338 STDOUT terraform:  + filename = "inventory.ci" 2025-09-16 00:01:32.338199 | orchestrator | 00:01:32.338 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.338206 | orchestrator | 00:01:32.338 STDOUT terraform:  } 2025-09-16 00:01:32.338240 | orchestrator | 00:01:32.338 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-09-16 00:01:32.338271 | orchestrator | 00:01:32.338 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-09-16 00:01:32.338306 | orchestrator | 00:01:32.338 STDOUT terraform:  + content = (sensitive value) 2025-09-16 
00:01:32.338349 | orchestrator | 00:01:32.338 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-16 00:01:32.338379 | orchestrator | 00:01:32.338 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-16 00:01:32.338416 | orchestrator | 00:01:32.338 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-16 00:01:32.338455 | orchestrator | 00:01:32.338 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-16 00:01:32.338502 | orchestrator | 00:01:32.338 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-16 00:01:32.338534 | orchestrator | 00:01:32.338 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-16 00:01:32.338555 | orchestrator | 00:01:32.338 STDOUT terraform:  + directory_permission = "0700" 2025-09-16 00:01:32.338581 | orchestrator | 00:01:32.338 STDOUT terraform:  + file_permission = "0600" 2025-09-16 00:01:32.338613 | orchestrator | 00:01:32.338 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-09-16 00:01:32.338652 | orchestrator | 00:01:32.338 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.338659 | orchestrator | 00:01:32.338 STDOUT terraform:  } 2025-09-16 00:01:32.338693 | orchestrator | 00:01:32.338 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-09-16 00:01:32.338753 | orchestrator | 00:01:32.338 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-09-16 00:01:32.338777 | orchestrator | 00:01:32.338 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.338793 | orchestrator | 00:01:32.338 STDOUT terraform:  } 2025-09-16 00:01:32.338890 | orchestrator | 00:01:32.338 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-09-16 00:01:32.338943 | orchestrator | 00:01:32.338 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-09-16 00:01:32.338976 | orchestrator | 00:01:32.338 STDOUT terraform:  + attachment = (known after apply) 2025-09-16 00:01:32.339001 | orchestrator | 00:01:32.338 STDOUT terraform:  + availability_zone = "nova" 2025-09-16 00:01:32.339039 | orchestrator | 00:01:32.338 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.339076 | orchestrator | 00:01:32.339 STDOUT terraform:  + image_id = (known after apply) 2025-09-16 00:01:32.339113 | orchestrator | 00:01:32.339 STDOUT terraform:  + metadata = (known after apply) 2025-09-16 00:01:32.339160 | orchestrator | 00:01:32.339 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-09-16 00:01:32.339202 | orchestrator | 00:01:32.339 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.339228 | orchestrator | 00:01:32.339 STDOUT terraform:  + size = 80 2025-09-16 00:01:32.339252 | orchestrator | 00:01:32.339 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-16 00:01:32.339276 | orchestrator | 00:01:32.339 STDOUT terraform:  + volume_type = "ssd" 2025-09-16 00:01:32.339284 | orchestrator | 00:01:32.339 STDOUT terraform:  } 2025-09-16 00:01:32.339332 | orchestrator | 00:01:32.339 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-09-16 00:01:32.339376 | orchestrator | 00:01:32.339 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-16 00:01:32.339408 | orchestrator | 00:01:32.339 STDOUT terraform:  + attachment = (known after apply) 2025-09-16 00:01:32.339431 | orchestrator | 00:01:32.339 STDOUT terraform:  + availability_zone = "nova" 2025-09-16 
00:01:32.339466 | orchestrator | 00:01:32.339 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.339499 | orchestrator | 00:01:32.339 STDOUT terraform:  + image_id = (known after apply) 2025-09-16 00:01:32.339534 | orchestrator | 00:01:32.339 STDOUT terraform:  + metadata = (known after apply) 2025-09-16 00:01:32.339578 | orchestrator | 00:01:32.339 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-09-16 00:01:32.339611 | orchestrator | 00:01:32.339 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.339631 | orchestrator | 00:01:32.339 STDOUT terraform:  + size = 80 2025-09-16 00:01:32.339654 | orchestrator | 00:01:32.339 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-16 00:01:32.339677 | orchestrator | 00:01:32.339 STDOUT terraform:  + volume_type = "ssd" 2025-09-16 00:01:32.339684 | orchestrator | 00:01:32.339 STDOUT terraform:  } 2025-09-16 00:01:32.339744 | orchestrator | 00:01:32.339 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-09-16 00:01:32.339787 | orchestrator | 00:01:32.339 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-16 00:01:32.339821 | orchestrator | 00:01:32.339 STDOUT terraform:  + attachment = (known after apply) 2025-09-16 00:01:32.339844 | orchestrator | 00:01:32.339 STDOUT terraform:  + availability_zone = "nova" 2025-09-16 00:01:32.339880 | orchestrator | 00:01:32.339 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.339913 | orchestrator | 00:01:32.339 STDOUT terraform:  + image_id = (known after apply) 2025-09-16 00:01:32.339949 | orchestrator | 00:01:32.339 STDOUT terraform:  + metadata = (known after apply) 2025-09-16 00:01:32.339991 | orchestrator | 00:01:32.339 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-09-16 00:01:32.340024 | orchestrator | 00:01:32.339 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.340044 | orchestrator | 00:01:32.340 STDOUT terraform:  + size = 80 2025-09-16 00:01:32.340070 | orchestrator | 00:01:32.340 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-16 00:01:32.340118 | orchestrator | 00:01:32.340 STDOUT terraform:  + volume_type = "ssd" 2025-09-16 00:01:32.340338 | orchestrator | 00:01:32.340 STDOUT terraform:  } 2025-09-16 00:01:32.340522 | orchestrator | 00:01:32.340 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-09-16 00:01:32.340644 | orchestrator | 00:01:32.340 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-16 00:01:32.340679 | orchestrator | 00:01:32.340 STDOUT terraform:  + attachment = (known after apply) 2025-09-16 00:01:32.340983 | orchestrator | 00:01:32.340 STDOUT terraform:  + availability_zone = "nova" 2025-09-16 00:01:32.341106 | orchestrator | 00:01:32.340 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.341210 | orchestrator | 00:01:32.340 STDOUT terraform:  + image_id = (known after apply) 2025-09-16 00:01:32.341470 | orchestrator | 00:01:32.340 STDOUT terraform:  + metadata = (known after apply) 2025-09-16 00:01:32.341872 | orchestrator | 00:01:32.340 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-09-16 00:01:32.341980 | orchestrator | 00:01:32.340 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.342336 | orchestrator | 00:01:32.340 STDOUT terraform:  + size = 80 2025-09-16 00:01:32.342684 | orchestrator | 00:01:32.340 STDOUT terraform:  + 
volume_retype_policy = "never" 2025-09-16 00:01:32.343088 | orchestrator | 00:01:32.340 STDOUT terraform:  + volume_type = "ssd" 2025-09-16 00:01:32.343221 | orchestrator | 00:01:32.340 STDOUT terraform:  } 2025-09-16 00:01:32.343231 | orchestrator | 00:01:32.340 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-09-16 00:01:32.343236 | orchestrator | 00:01:32.340 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-16 00:01:32.343242 | orchestrator | 00:01:32.340 STDOUT terraform:  + attachment = (known after apply) 2025-09-16 00:01:32.343247 | orchestrator | 00:01:32.340 STDOUT terraform:  + availability_zone = "nova" 2025-09-16 00:01:32.343252 | orchestrator | 00:01:32.340 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.343257 | orchestrator | 00:01:32.340 STDOUT terraform:  + image_id = (known after apply) 2025-09-16 00:01:32.343261 | orchestrator | 00:01:32.340 STDOUT terraform:  + metadata = (known after apply) 2025-09-16 00:01:32.343276 | orchestrator | 00:01:32.340 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-09-16 00:01:32.343281 | orchestrator | 00:01:32.340 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.343285 | orchestrator | 00:01:32.340 STDOUT terraform:  + size = 80 2025-09-16 00:01:32.343290 | orchestrator | 00:01:32.340 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-16 00:01:32.343295 | orchestrator | 00:01:32.340 STDOUT terraform:  + volume_type = "ssd" 2025-09-16 00:01:32.343299 | orchestrator | 00:01:32.340 STDOUT terraform:  } 2025-09-16 00:01:32.343304 | orchestrator | 00:01:32.340 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-09-16 00:01:32.343313 | orchestrator | 00:01:32.340 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-16 00:01:32.343318 | orchestrator | 00:01:32.340 STDOUT terraform:  + attachment = (known after apply) 2025-09-16 00:01:32.343322 | orchestrator | 00:01:32.340 STDOUT terraform:  + availability_zone = "nova" 2025-09-16 00:01:32.343327 | orchestrator | 00:01:32.340 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.343331 | orchestrator | 00:01:32.340 STDOUT terraform:  + image_id = (known after apply) 2025-09-16 00:01:32.343336 | orchestrator | 00:01:32.340 STDOUT terraform:  + metadata = (known after apply) 2025-09-16 00:01:32.343341 | orchestrator | 00:01:32.341 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-09-16 00:01:32.343345 | orchestrator | 00:01:32.341 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.343350 | orchestrator | 00:01:32.341 STDOUT terraform:  + size = 80 2025-09-16 00:01:32.343355 | orchestrator | 00:01:32.341 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-16 00:01:32.343359 | orchestrator | 00:01:32.341 STDOUT terraform:  + volume_type = "ssd" 2025-09-16 00:01:32.343364 | orchestrator | 00:01:32.341 STDOUT terraform:  } 2025-09-16 00:01:32.343369 | orchestrator | 00:01:32.341 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-09-16 00:01:32.343373 | orchestrator | 00:01:32.341 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-16 00:01:32.343378 | orchestrator | 00:01:32.341 STDOUT terraform:  + attachment = (known after apply) 2025-09-16 00:01:32.343393 | orchestrator | 00:01:32.341 STDOUT terraform:  + availability_zone = "nova" 
2025-09-16 00:01:32.343398 | orchestrator | 00:01:32.341 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.343402 | orchestrator | 00:01:32.341 STDOUT terraform:  + image_id = (known after apply) 2025-09-16 00:01:32.343407 | orchestrator | 00:01:32.341 STDOUT terraform:  + metadata = (known after apply) 2025-09-16 00:01:32.343411 | orchestrator | 00:01:32.341 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-09-16 00:01:32.343416 | orchestrator | 00:01:32.341 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.343420 | orchestrator | 00:01:32.341 STDOUT terraform:  + size = 80 2025-09-16 00:01:32.343429 | orchestrator | 00:01:32.341 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-16 00:01:32.343433 | orchestrator | 00:01:32.341 STDOUT terraform:  + volume_type = "ssd" 2025-09-16 00:01:32.343438 | orchestrator | 00:01:32.341 STDOUT terraform:  } 2025-09-16 00:01:32.343442 | orchestrator | 00:01:32.341 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-09-16 00:01:32.343447 | orchestrator | 00:01:32.341 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-16 00:01:32.343454 | orchestrator | 00:01:32.341 STDOUT terraform:  + attachment = (known after apply) 2025-09-16 00:01:32.343459 | orchestrator | 00:01:32.341 STDOUT terraform:  + availability_zone = "nova" 2025-09-16 00:01:32.343464 | orchestrator | 00:01:32.341 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.343468 | orchestrator | 00:01:32.341 STDOUT terraform:  + metadata = (known after apply) 2025-09-16 00:01:32.343473 | orchestrator | 00:01:32.341 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-09-16 00:01:32.343477 | orchestrator | 00:01:32.341 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.343482 | orchestrator | 00:01:32.341 STDOUT terraform:  + size = 20 2025-09-16 00:01:32.343486 | orchestrator | 00:01:32.341 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-16 00:01:32.343491 | orchestrator | 00:01:32.341 STDOUT terraform:  + volume_type = "ssd" 2025-09-16 00:01:32.343495 | orchestrator | 00:01:32.341 STDOUT terraform:  } 2025-09-16 00:01:32.343500 | orchestrator | 00:01:32.341 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-09-16 00:01:32.343504 | orchestrator | 00:01:32.341 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-16 00:01:32.343509 | orchestrator | 00:01:32.341 STDOUT terraform:  + attachment = (known after apply) 2025-09-16 00:01:32.343513 | orchestrator | 00:01:32.341 STDOUT terraform:  + availability_zone = "nova" 2025-09-16 00:01:32.343518 | orchestrator | 00:01:32.341 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.343522 | orchestrator | 00:01:32.341 STDOUT terraform:  + metadata = (known after apply) 2025-09-16 00:01:32.343527 | orchestrator | 00:01:32.341 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-09-16 00:01:32.343531 | orchestrator | 00:01:32.342 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.343537 | orchestrator | 00:01:32.342 STDOUT terraform:  + size = 20 2025-09-16 00:01:32.343542 | orchestrator | 00:01:32.342 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-16 00:01:32.343546 | orchestrator | 00:01:32.342 STDOUT terraform:  + volume_type = "ssd" 2025-09-16 00:01:32.343551 | orchestrator | 00:01:32.342 STDOUT terraform:  } 2025-09-16 00:01:32.343555 | orchestrator 
| 00:01:32.342 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-09-16 00:01:32.343560 | orchestrator | 00:01:32.342 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-16 00:01:32.343564 | orchestrator | 00:01:32.342 STDOUT terraform:  + attachment = (known after apply) 2025-09-16 00:01:32.343572 | orchestrator | 00:01:32.342 STDOUT terraform:  + availability_zone = "nova" 2025-09-16 00:01:32.343580 | orchestrator | 00:01:32.342 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.343585 | orchestrator | 00:01:32.342 STDOUT terraform:  + metadata = (known after apply) 2025-09-16 00:01:32.343589 | orchestrator | 00:01:32.342 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-09-16 00:01:32.343594 | orchestrator | 00:01:32.342 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.343598 | orchestrator | 00:01:32.342 STDOUT terraform:  + size = 20 2025-09-16 00:01:32.343609 | orchestrator | 00:01:32.342 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-16 00:01:32.343613 | orchestrator | 00:01:32.342 STDOUT terraform:  + volume_type = "ssd" 2025-09-16 00:01:32.343618 | orchestrator | 00:01:32.342 STDOUT terraform:  } 2025-09-16 00:01:32.343622 | orchestrator | 00:01:32.342 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-09-16 00:01:32.343627 | orchestrator | 00:01:32.342 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-16 00:01:32.343631 | orchestrator | 00:01:32.342 STDOUT terraform:  + attachment = (known after apply) 2025-09-16 00:01:32.343636 | orchestrator | 00:01:32.342 STDOUT terraform:  + availability_zone = "nova" 2025-09-16 00:01:32.343640 | orchestrator | 00:01:32.342 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.343645 | orchestrator | 00:01:32.342 STDOUT terraform:  + metadata = (known after apply) 2025-09-16 00:01:32.343649 | orchestrator | 00:01:32.342 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-09-16 00:01:32.343654 | orchestrator | 00:01:32.342 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.343658 | orchestrator | 00:01:32.342 STDOUT terraform:  + size = 20 2025-09-16 00:01:32.343663 | orchestrator | 00:01:32.342 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-16 00:01:32.343667 | orchestrator | 00:01:32.342 STDOUT terraform:  + volume_type = "ssd" 2025-09-16 00:01:32.343672 | orchestrator | 00:01:32.342 STDOUT terraform:  } 2025-09-16 00:01:32.343677 | orchestrator | 00:01:32.342 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-09-16 00:01:32.343681 | orchestrator | 00:01:32.342 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-16 00:01:32.343685 | orchestrator | 00:01:32.343 STDOUT terraform:  + attachment = (known after apply) 2025-09-16 00:01:32.343690 | orchestrator | 00:01:32.343 STDOUT terraform:  + availability_zone = "nova" 2025-09-16 00:01:32.343695 | orchestrator | 00:01:32.343 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.343699 | orchestrator | 00:01:32.343 STDOUT terraform:  + metadata = (known after apply) 2025-09-16 00:01:32.343717 | orchestrator | 00:01:32.343 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-09-16 00:01:32.343721 | orchestrator | 00:01:32.343 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.343729 | orchestrator | 00:01:32.343 STDOUT 
terraform:  + size = 20 2025-09-16 00:01:32.343734 | orchestrator | 00:01:32.343 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-16 00:01:32.343739 | orchestrator | 00:01:32.343 STDOUT terraform:  + volume_type = "ssd" 2025-09-16 00:01:32.343743 | orchestrator | 00:01:32.343 STDOUT terraform:  } 2025-09-16 00:01:32.343748 | orchestrator | 00:01:32.343 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-09-16 00:01:32.343752 | orchestrator | 00:01:32.343 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-16 00:01:32.343757 | orchestrator | 00:01:32.343 STDOUT terraform:  + attachment = (known after apply) 2025-09-16 00:01:32.343761 | orchestrator | 00:01:32.343 STDOUT terraform:  + availability_zone = "nova" 2025-09-16 00:01:32.343766 | orchestrator | 00:01:32.343 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.343777 | orchestrator | 00:01:32.343 STDOUT terraform:  + metadata = (known after apply) 2025-09-16 00:01:32.343782 | orchestrator | 00:01:32.343 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-09-16 00:01:32.343787 | orchestrator | 00:01:32.343 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.343795 | orchestrator | 00:01:32.343 STDOUT terraform:  + size = 20 2025-09-16 00:01:32.343799 | orchestrator | 00:01:32.343 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-16 00:01:32.343804 | orchestrator | 00:01:32.343 STDOUT terraform:  + volume_type = "ssd" 2025-09-16 00:01:32.343808 | orchestrator | 00:01:32.343 STDOUT terraform:  } 2025-09-16 00:01:32.343813 | orchestrator | 00:01:32.343 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-09-16 00:01:32.343817 | orchestrator | 00:01:32.343 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-16 00:01:32.343822 | orchestrator | 00:01:32.343 STDOUT terraform:  + attachment = (known after apply) 2025-09-16 00:01:32.343827 | orchestrator | 00:01:32.343 STDOUT terraform:  + availability_zone = "nova" 2025-09-16 00:01:32.343831 | orchestrator | 00:01:32.343 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.343836 | orchestrator | 00:01:32.343 STDOUT terraform:  + metadata = (known after apply) 2025-09-16 00:01:32.343842 | orchestrator | 00:01:32.343 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-09-16 00:01:32.343847 | orchestrator | 00:01:32.343 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.343852 | orchestrator | 00:01:32.343 STDOUT terraform:  + size = 20 2025-09-16 00:01:32.343858 | orchestrator | 00:01:32.343 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-16 00:01:32.343864 | orchestrator | 00:01:32.343 STDOUT terraform:  + volume_type = "ssd" 2025-09-16 00:01:32.343871 | orchestrator | 00:01:32.343 STDOUT terraform:  } 2025-09-16 00:01:32.344614 | orchestrator | 00:01:32.343 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-09-16 00:01:32.344624 | orchestrator | 00:01:32.343 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-16 00:01:32.344633 | orchestrator | 00:01:32.343 STDOUT terraform:  + attachment = (known after apply) 2025-09-16 00:01:32.344638 | orchestrator | 00:01:32.343 STDOUT terraform:  + availability_zone = "nova" 2025-09-16 00:01:32.344642 | orchestrator | 00:01:32.344 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.344647 | orchestrator | 
00:01:32.344 STDOUT terraform:  + metadata = (known after apply) 2025-09-16 00:01:32.344651 | orchestrator | 00:01:32.344 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-09-16 00:01:32.344656 | orchestrator | 00:01:32.344 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.344660 | orchestrator | 00:01:32.344 STDOUT terraform:  + size = 20 2025-09-16 00:01:32.344665 | orchestrator | 00:01:32.344 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-16 00:01:32.344669 | orchestrator | 00:01:32.344 STDOUT terraform:  + volume_type = "ssd" 2025-09-16 00:01:32.344674 | orchestrator | 00:01:32.344 STDOUT terraform:  } 2025-09-16 00:01:32.344679 | orchestrator | 00:01:32.344 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-09-16 00:01:32.344683 | orchestrator | 00:01:32.344 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-16 00:01:32.344688 | orchestrator | 00:01:32.344 STDOUT terraform:  + attachment = (known after apply) 2025-09-16 00:01:32.344692 | orchestrator | 00:01:32.344 STDOUT terraform:  + availability_zone = "nova" 2025-09-16 00:01:32.344697 | orchestrator | 00:01:32.344 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.344735 | orchestrator | 00:01:32.344 STDOUT terraform:  + metadata = (known after apply) 2025-09-16 00:01:32.344741 | orchestrator | 00:01:32.344 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-09-16 00:01:32.344745 | orchestrator | 00:01:32.344 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.344750 | orchestrator | 00:01:32.344 STDOUT terraform:  + size = 20 2025-09-16 00:01:32.344754 | orchestrator | 00:01:32.344 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-16 00:01:32.344759 | orchestrator | 00:01:32.344 STDOUT terraform:  + volume_type = "ssd" 2025-09-16 00:01:32.344763 | orchestrator | 00:01:32.344 STDOUT terraform:  } 2025-09-16 00:01:32.344771 | orchestrator | 00:01:32.344 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-09-16 00:01:32.344776 | orchestrator | 00:01:32.344 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-09-16 00:01:32.344782 | orchestrator | 00:01:32.344 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-16 00:01:32.344787 | orchestrator | 00:01:32.344 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-16 00:01:32.344792 | orchestrator | 00:01:32.344 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-16 00:01:32.344796 | orchestrator | 00:01:32.344 STDOUT terraform:  + all_tags = (known after apply) 2025-09-16 00:01:32.344801 | orchestrator | 00:01:32.344 STDOUT terraform:  + availability_zone = "nova" 2025-09-16 00:01:32.344810 | orchestrator | 00:01:32.344 STDOUT terraform:  + config_drive = true 2025-09-16 00:01:32.344815 | orchestrator | 00:01:32.344 STDOUT terraform:  + created = (known after apply) 2025-09-16 00:01:32.344819 | orchestrator | 00:01:32.344 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-16 00:01:32.344826 | orchestrator | 00:01:32.344 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-09-16 00:01:32.344830 | orchestrator | 00:01:32.344 STDOUT terraform:  + force_delete = false 2025-09-16 00:01:32.344910 | orchestrator | 00:01:32.344 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-16 00:01:32.345003 | orchestrator | 00:01:32.344 STDOUT terraform:  + id = (known after apply) 2025-09-16 
00:01:32.345267 | orchestrator | 00:01:32.344 STDOUT terraform:  + image_id = (known after apply) 2025-09-16 00:01:32.345315 | orchestrator | 00:01:32.344 STDOUT terraform:  + image_name = (known after apply) 2025-09-16 00:01:32.345370 | orchestrator | 00:01:32.344 STDOUT terraform:  + key_pair = "testbed" 2025-09-16 00:01:32.345375 | orchestrator | 00:01:32.344 STDOUT terraform:  + name = "testbed-manager" 2025-09-16 00:01:32.345379 | orchestrator | 00:01:32.344 STDOUT terraform:  + power_state = "active" 2025-09-16 00:01:32.345383 | orchestrator | 00:01:32.345 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.345387 | orchestrator | 00:01:32.345 STDOUT terraform:  + security_groups = (known after apply) 2025-09-16 00:01:32.345391 | orchestrator | 00:01:32.345 STDOUT terraform:  + stop_before_destroy = false 2025-09-16 00:01:32.345395 | orchestrator | 00:01:32.345 STDOUT terraform:  + updated = (known after apply) 2025-09-16 00:01:32.345399 | orchestrator | 00:01:32.345 STDOUT terraform:  + user_data = (sensitive value) 2025-09-16 00:01:32.345403 | orchestrator | 00:01:32.345 STDOUT terraform:  + block_device { 2025-09-16 00:01:32.345408 | orchestrator | 00:01:32.345 STDOUT terraform:  + boot_index = 0 2025-09-16 00:01:32.345412 | orchestrator | 00:01:32.345 STDOUT terraform:  + delete_on_termination = false 2025-09-16 00:01:32.345416 | orchestrator | 00:01:32.345 STDOUT terraform:  + destination_type = "volume" 2025-09-16 00:01:32.345420 | orchestrator | 00:01:32.345 STDOUT terraform:  + multiattach = false 2025-09-16 00:01:32.345426 | orchestrator | 00:01:32.345 STDOUT terraform:  + source_type = "volume" 2025-09-16 00:01:32.345430 | orchestrator | 00:01:32.345 STDOUT terraform:  + uuid = (known after apply) 2025-09-16 00:01:32.345434 | orchestrator | 00:01:32.345 STDOUT terraform:  } 2025-09-16 00:01:32.345439 | orchestrator | 00:01:32.345 STDOUT terraform:  + network { 2025-09-16 00:01:32.345443 | orchestrator | 00:01:32.345 STDOUT terraform:  + access_network = false 2025-09-16 00:01:32.345447 | orchestrator | 00:01:32.345 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-16 00:01:32.345451 | orchestrator | 00:01:32.345 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-16 00:01:32.345455 | orchestrator | 00:01:32.345 STDOUT terraform:  + mac = (known after apply) 2025-09-16 00:01:32.345463 | orchestrator | 00:01:32.345 STDOUT terraform:  + name = (known after apply) 2025-09-16 00:01:32.345469 | orchestrator | 00:01:32.345 STDOUT terraform:  + port = (known after apply) 2025-09-16 00:01:32.345475 | orchestrator | 00:01:32.345 STDOUT terraform:  + uuid = (known after apply) 2025-09-16 00:01:32.348496 | orchestrator | 00:01:32.345 STDOUT terraform:  } 2025-09-16 00:01:32.348518 | orchestrator | 00:01:32.345 STDOUT terraform:  } 2025-09-16 00:01:32.348523 | orchestrator | 00:01:32.345 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-09-16 00:01:32.348527 | orchestrator | 00:01:32.345 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-16 00:01:32.348531 | orchestrator | 00:01:32.345 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-16 00:01:32.348541 | orchestrator | 00:01:32.345 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-16 00:01:32.348545 | orchestrator | 00:01:32.345 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-16 00:01:32.348549 | orchestrator | 00:01:32.345 STDOUT terraform:  + all_tags = (known after apply) 
2025-09-16 00:01:32.348553 | orchestrator | 00:01:32.345 STDOUT terraform:  + availability_zone = "nova" 2025-09-16 00:01:32.348557 | orchestrator | 00:01:32.345 STDOUT terraform:  + config_drive = true 2025-09-16 00:01:32.348562 | orchestrator | 00:01:32.345 STDOUT terraform:  + created = (known after apply) 2025-09-16 00:01:32.348566 | orchestrator | 00:01:32.345 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-16 00:01:32.348570 | orchestrator | 00:01:32.345 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-16 00:01:32.348574 | orchestrator | 00:01:32.345 STDOUT terraform:  + force_delete = false 2025-09-16 00:01:32.348578 | orchestrator | 00:01:32.345 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-16 00:01:32.348582 | orchestrator | 00:01:32.345 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.348586 | orchestrator | 00:01:32.345 STDOUT terraform:  + image_id = (known after apply) 2025-09-16 00:01:32.348590 | orchestrator | 00:01:32.345 STDOUT terraform:  + image_name = (known after apply) 2025-09-16 00:01:32.348593 | orchestrator | 00:01:32.345 STDOUT terraform:  + key_pair = "testbed" 2025-09-16 00:01:32.348597 | orchestrator | 00:01:32.345 STDOUT terraform:  + name = "testbed-node-0" 2025-09-16 00:01:32.348601 | orchestrator | 00:01:32.345 STDOUT terraform:  + power_state = "active" 2025-09-16 00:01:32.348605 | orchestrator | 00:01:32.346 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.348609 | orchestrator | 00:01:32.346 STDOUT terraform:  + security_groups = (known after apply) 2025-09-16 00:01:32.348613 | orchestrator | 00:01:32.346 STDOUT terraform:  + stop_before_destroy = false 2025-09-16 00:01:32.348617 | orchestrator | 00:01:32.346 STDOUT terraform:  + updated = (known after apply) 2025-09-16 00:01:32.348621 | orchestrator | 00:01:32.346 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-16 00:01:32.348625 | orchestrator | 00:01:32.346 STDOUT terraform:  + block_device { 2025-09-16 00:01:32.348636 | orchestrator | 00:01:32.346 STDOUT terraform:  + boot_index = 0 2025-09-16 00:01:32.348640 | orchestrator | 00:01:32.346 STDOUT terraform:  + delete_on_termination = false 2025-09-16 00:01:32.348644 | orchestrator | 00:01:32.346 STDOUT terraform:  + destination_type = "volume" 2025-09-16 00:01:32.348648 | orchestrator | 00:01:32.346 STDOUT terraform:  + multiattach = false 2025-09-16 00:01:32.348652 | orchestrator | 00:01:32.346 STDOUT terraform:  + source_type = "volume" 2025-09-16 00:01:32.348656 | orchestrator | 00:01:32.346 STDOUT terraform:  + uuid = (known after apply) 2025-09-16 00:01:32.348659 | orchestrator | 00:01:32.346 STDOUT terraform:  } 2025-09-16 00:01:32.348664 | orchestrator | 00:01:32.346 STDOUT terraform:  + network { 2025-09-16 00:01:32.348668 | orchestrator | 00:01:32.346 STDOUT terraform:  + access_network = false 2025-09-16 00:01:32.348672 | orchestrator | 00:01:32.346 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-16 00:01:32.348675 | orchestrator | 00:01:32.346 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-16 00:01:32.348685 | orchestrator | 00:01:32.346 STDOUT terraform:  + mac = (known after apply) 2025-09-16 00:01:32.348690 | orchestrator | 00:01:32.346 STDOUT terraform:  + name = (known after apply) 2025-09-16 00:01:32.348694 | orchestrator | 00:01:32.346 STDOUT terraform:  + port = (known after apply) 2025-09-16 00:01:32.348698 | orchestrator | 00:01:32.346 STDOUT terraform:  + uuid = (known after apply) 
2025-09-16 00:01:32.348740 | orchestrator | 00:01:32.346 STDOUT terraform:  } 2025-09-16 00:01:32.348744 | orchestrator | 00:01:32.346 STDOUT terraform:  } 2025-09-16 00:01:32.348748 | orchestrator | 00:01:32.346 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-09-16 00:01:32.348752 | orchestrator | 00:01:32.346 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-16 00:01:32.348756 | orchestrator | 00:01:32.346 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-16 00:01:32.348760 | orchestrator | 00:01:32.346 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-16 00:01:32.348763 | orchestrator | 00:01:32.346 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-16 00:01:32.348767 | orchestrator | 00:01:32.346 STDOUT terraform:  + all_tags = (known after apply) 2025-09-16 00:01:32.348771 | orchestrator | 00:01:32.346 STDOUT terraform:  + availability_zone = "nova" 2025-09-16 00:01:32.348775 | orchestrator | 00:01:32.346 STDOUT terraform:  + config_drive = true 2025-09-16 00:01:32.348779 | orchestrator | 00:01:32.346 STDOUT terraform:  + created = (known after apply) 2025-09-16 00:01:32.348782 | orchestrator | 00:01:32.346 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-16 00:01:32.348786 | orchestrator | 00:01:32.346 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-16 00:01:32.348790 | orchestrator | 00:01:32.346 STDOUT terraform:  + force_delete = false 2025-09-16 00:01:32.348796 | orchestrator | 00:01:32.346 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-16 00:01:32.348804 | orchestrator | 00:01:32.346 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.348807 | orchestrator | 00:01:32.346 STDOUT terraform:  + image_id = (known after apply) 2025-09-16 00:01:32.348811 | orchestrator | 00:01:32.346 STDOUT terraform:  + image_name = (known after apply) 2025-09-16 00:01:32.348815 | orchestrator | 00:01:32.346 STDOUT terraform:  + key_pair = "testbed" 2025-09-16 00:01:32.348819 | orchestrator | 00:01:32.347 STDOUT terraform:  + name = "testbed-node-1" 2025-09-16 00:01:32.348822 | orchestrator | 00:01:32.347 STDOUT terraform:  + power_state = "active" 2025-09-16 00:01:32.348826 | orchestrator | 00:01:32.347 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.348830 | orchestrator | 00:01:32.347 STDOUT terraform:  + security_groups = (known after apply) 2025-09-16 00:01:32.348834 | orchestrator | 00:01:32.347 STDOUT terraform:  + stop_before_destroy = false 2025-09-16 00:01:32.348838 | orchestrator | 00:01:32.347 STDOUT terraform:  + updated = (known after apply) 2025-09-16 00:01:32.348841 | orchestrator | 00:01:32.347 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-16 00:01:32.348845 | orchestrator | 00:01:32.347 STDOUT terraform:  + block_device { 2025-09-16 00:01:32.348849 | orchestrator | 00:01:32.347 STDOUT terraform:  + boot_index = 0 2025-09-16 00:01:32.348855 | orchestrator | 00:01:32.347 STDOUT terraform:  + delete_on_termination = false 2025-09-16 00:01:32.348859 | orchestrator | 00:01:32.347 STDOUT terraform:  + destination_type = "volume" 2025-09-16 00:01:32.348863 | orchestrator | 00:01:32.347 STDOUT terraform:  + multiattach = false 2025-09-16 00:01:32.348866 | orchestrator | 00:01:32.347 STDOUT terraform:  + source_type = "volume" 2025-09-16 00:01:32.348870 | orchestrator | 00:01:32.347 STDOUT terraform:  + uuid = (known after apply) 2025-09-16 00:01:32.348874 | 
orchestrator | 00:01:32.347 STDOUT terraform:  } 2025-09-16 00:01:32.348881 | orchestrator | 00:01:32.347 STDOUT terraform:  + network { 2025-09-16 00:01:32.348885 | orchestrator | 00:01:32.347 STDOUT terraform:  + access_network = false 2025-09-16 00:01:32.348889 | orchestrator | 00:01:32.347 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-16 00:01:32.348892 | orchestrator | 00:01:32.347 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-16 00:01:32.348896 | orchestrator | 00:01:32.347 STDOUT terraform:  + mac = (known after apply) 2025-09-16 00:01:32.348900 | orchestrator | 00:01:32.347 STDOUT terraform:  + name = (known after apply) 2025-09-16 00:01:32.348904 | orchestrator | 00:01:32.347 STDOUT terraform:  + port = (known after apply) 2025-09-16 00:01:32.348907 | orchestrator | 00:01:32.347 STDOUT terraform:  + uuid = (known after apply) 2025-09-16 00:01:32.348911 | orchestrator | 00:01:32.347 STDOUT terraform:  } 2025-09-16 00:01:32.348915 | orchestrator | 00:01:32.347 STDOUT terraform:  } 2025-09-16 00:01:32.348919 | orchestrator | 00:01:32.347 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-09-16 00:01:32.348923 | orchestrator | 00:01:32.347 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-16 00:01:32.348929 | orchestrator | 00:01:32.347 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-16 00:01:32.348933 | orchestrator | 00:01:32.347 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-16 00:01:32.348937 | orchestrator | 00:01:32.347 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-16 00:01:32.348941 | orchestrator | 00:01:32.347 STDOUT terraform:  + all_tags = (known after apply) 2025-09-16 00:01:32.348944 | orchestrator | 00:01:32.347 STDOUT terraform:  + availability_zone = "nova" 2025-09-16 00:01:32.348948 | orchestrator | 00:01:32.347 STDOUT terraform:  + config_drive = true 2025-09-16 00:01:32.348952 | orchestrator | 00:01:32.347 STDOUT terraform:  + created = (known after apply) 2025-09-16 00:01:32.348956 | orchestrator | 00:01:32.347 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-16 00:01:32.348960 | orchestrator | 00:01:32.347 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-16 00:01:32.348963 | orchestrator | 00:01:32.347 STDOUT terraform:  + force_delete = false 2025-09-16 00:01:32.348967 | orchestrator | 00:01:32.347 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-16 00:01:32.348971 | orchestrator | 00:01:32.347 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.348975 | orchestrator | 00:01:32.348 STDOUT terraform:  + image_id = (known after apply) 2025-09-16 00:01:32.348978 | orchestrator | 00:01:32.348 STDOUT terraform:  + image_name = (known after apply) 2025-09-16 00:01:32.348982 | orchestrator | 00:01:32.348 STDOUT terraform:  + key_pair = "testbed" 2025-09-16 00:01:32.348986 | orchestrator | 00:01:32.348 STDOUT terraform:  + name = "testbed-node-2" 2025-09-16 00:01:32.348990 | orchestrator | 00:01:32.348 STDOUT terraform:  + power_state = "active" 2025-09-16 00:01:32.348993 | orchestrator | 00:01:32.348 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.348997 | orchestrator | 00:01:32.348 STDOUT terraform:  + security_groups = (known after apply) 2025-09-16 00:01:32.349001 | orchestrator | 00:01:32.348 STDOUT terraform:  + stop_before_destroy = false 2025-09-16 00:01:32.349005 | orchestrator | 00:01:32.348 STDOUT terraform:  + updated = (known 
after apply) 2025-09-16 00:01:32.349008 | orchestrator | 00:01:32.348 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-16 00:01:32.349012 | orchestrator | 00:01:32.348 STDOUT terraform:  + block_device { 2025-09-16 00:01:32.349093 | orchestrator | 00:01:32.348 STDOUT terraform:  + boot_index = 0 2025-09-16 00:01:32.349141 | orchestrator | 00:01:32.349 STDOUT terraform:  + delete_on_termination = false 2025-09-16 00:01:32.349179 | orchestrator | 00:01:32.349 STDOUT terraform:  + destination_type = "volume" 2025-09-16 00:01:32.349202 | orchestrator | 00:01:32.349 STDOUT terraform:  + multiattach = false 2025-09-16 00:01:32.349231 | orchestrator | 00:01:32.349 STDOUT terraform:  + source_type = "volume" 2025-09-16 00:01:32.349270 | orchestrator | 00:01:32.349 STDOUT terraform:  + uuid = (known after apply) 2025-09-16 00:01:32.349281 | orchestrator | 00:01:32.349 STDOUT terraform:  } 2025-09-16 00:01:32.349288 | orchestrator | 00:01:32.349 STDOUT terraform:  + network { 2025-09-16 00:01:32.349310 | orchestrator | 00:01:32.349 STDOUT terraform:  + access_network = false 2025-09-16 00:01:32.349341 | orchestrator | 00:01:32.349 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-16 00:01:32.349372 | orchestrator | 00:01:32.349 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-16 00:01:32.349402 | orchestrator | 00:01:32.349 STDOUT terraform:  + mac = (known after apply) 2025-09-16 00:01:32.349432 | orchestrator | 00:01:32.349 STDOUT terraform:  + name = (known after apply) 2025-09-16 00:01:32.349462 | orchestrator | 00:01:32.349 STDOUT terraform:  + port = (known after apply) 2025-09-16 00:01:32.349501 | orchestrator | 00:01:32.349 STDOUT terraform:  + uuid = (known after apply) 2025-09-16 00:01:32.349509 | orchestrator | 00:01:32.349 STDOUT terraform:  } 2025-09-16 00:01:32.349526 | orchestrator | 00:01:32.349 STDOUT terraform:  } 2025-09-16 00:01:32.349568 | orchestrator | 00:01:32.349 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-09-16 00:01:32.349608 | orchestrator | 00:01:32.349 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-16 00:01:32.349641 | orchestrator | 00:01:32.349 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-16 00:01:32.349674 | orchestrator | 00:01:32.349 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-16 00:01:32.349740 | orchestrator | 00:01:32.349 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-16 00:01:32.349749 | orchestrator | 00:01:32.349 STDOUT terraform:  + all_tags = (known after apply) 2025-09-16 00:01:32.349770 | orchestrator | 00:01:32.349 STDOUT terraform:  + availability_zone = "nova" 2025-09-16 00:01:32.349790 | orchestrator | 00:01:32.349 STDOUT terraform:  + config_drive = true 2025-09-16 00:01:32.349825 | orchestrator | 00:01:32.349 STDOUT terraform:  + created = (known after apply) 2025-09-16 00:01:32.349859 | orchestrator | 00:01:32.349 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-16 00:01:32.349887 | orchestrator | 00:01:32.349 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-16 00:01:32.349909 | orchestrator | 00:01:32.349 STDOUT terraform:  + force_delete = false 2025-09-16 00:01:32.349943 | orchestrator | 00:01:32.349 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-16 00:01:32.349976 | orchestrator | 00:01:32.349 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.350009 | orchestrator | 00:01:32.349 STDOUT 
terraform:  + image_id = (known after apply) 2025-09-16 00:01:32.350068 | orchestrator | 00:01:32.350 STDOUT terraform:  + image_name = (known after apply) 2025-09-16 00:01:32.350091 | orchestrator | 00:01:32.350 STDOUT terraform:  + key_pair = "testbed" 2025-09-16 00:01:32.350122 | orchestrator | 00:01:32.350 STDOUT terraform:  + name = "testbed-node-3" 2025-09-16 00:01:32.350146 | orchestrator | 00:01:32.350 STDOUT terraform:  + power_state = "active" 2025-09-16 00:01:32.350210 | orchestrator | 00:01:32.350 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.350218 | orchestrator | 00:01:32.350 STDOUT terraform:  + security_groups = (known after apply) 2025-09-16 00:01:32.350248 | orchestrator | 00:01:32.350 STDOUT terraform:  + stop_before_destroy = false 2025-09-16 00:01:32.350304 | orchestrator | 00:01:32.350 STDOUT terraform:  + updated = (known after apply) 2025-09-16 00:01:32.350312 | orchestrator | 00:01:32.350 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-16 00:01:32.350361 | orchestrator | 00:01:32.350 STDOUT terraform:  + block_device { 2025-09-16 00:01:32.350371 | orchestrator | 00:01:32.350 STDOUT terraform:  + boot_index = 0 2025-09-16 00:01:32.350375 | orchestrator | 00:01:32.350 STDOUT terraform:  + delete_on_termination = false 2025-09-16 00:01:32.350429 | orchestrator | 00:01:32.350 STDOUT terraform:  + destination_type = "volume" 2025-09-16 00:01:32.350435 | orchestrator | 00:01:32.350 STDOUT terraform:  + multiattach = false 2025-09-16 00:01:32.350471 | orchestrator | 00:01:32.350 STDOUT terraform:  + source_type = "volume" 2025-09-16 00:01:32.350503 | orchestrator | 00:01:32.350 STDOUT terraform:  + uuid = (known after apply) 2025-09-16 00:01:32.350508 | orchestrator | 00:01:32.350 STDOUT terraform:  } 2025-09-16 00:01:32.350514 | orchestrator | 00:01:32.350 STDOUT terraform:  + network { 2025-09-16 00:01:32.350519 | orchestrator | 00:01:32.350 STDOUT terraform:  + access_network = false 2025-09-16 00:01:32.350589 | orchestrator | 00:01:32.350 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-16 00:01:32.350599 | orchestrator | 00:01:32.350 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-16 00:01:32.350637 | orchestrator | 00:01:32.350 STDOUT terraform:  + mac = (known after apply) 2025-09-16 00:01:32.350697 | orchestrator | 00:01:32.350 STDOUT terraform:  + name = (known after apply) 2025-09-16 00:01:32.350726 | orchestrator | 00:01:32.350 STDOUT terraform:  + port = (known after apply) 2025-09-16 00:01:32.350761 | orchestrator | 00:01:32.350 STDOUT terraform:  + uuid = (known after apply) 2025-09-16 00:01:32.350768 | orchestrator | 00:01:32.350 STDOUT terraform:  } 2025-09-16 00:01:32.350773 | orchestrator | 00:01:32.350 STDOUT terraform:  } 2025-09-16 00:01:32.350823 | orchestrator | 00:01:32.350 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-09-16 00:01:32.350880 | orchestrator | 00:01:32.350 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-16 00:01:32.350884 | orchestrator | 00:01:32.350 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-16 00:01:32.350933 | orchestrator | 00:01:32.350 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-16 00:01:32.351050 | orchestrator | 00:01:32.350 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-16 00:01:32.351056 | orchestrator | 00:01:32.350 STDOUT terraform:  + all_tags = (known after apply) 2025-09-16 00:01:32.351060 | 
orchestrator | 00:01:32.350 STDOUT terraform:  + availability_zone = "nova" 2025-09-16 00:01:32.351068 | orchestrator | 00:01:32.350 STDOUT terraform:  + config_drive = true 2025-09-16 00:01:32.351076 | orchestrator | 00:01:32.350 STDOUT terraform:  + created = (known after apply) 2025-09-16 00:01:32.351081 | orchestrator | 00:01:32.351 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-16 00:01:32.351085 | orchestrator | 00:01:32.351 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-16 00:01:32.351146 | orchestrator | 00:01:32.351 STDOUT terraform:  + force_delete = false 2025-09-16 00:01:32.351156 | orchestrator | 00:01:32.351 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-16 00:01:32.351161 | orchestrator | 00:01:32.351 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.351240 | orchestrator | 00:01:32.351 STDOUT terraform:  + image_id = (known after apply) 2025-09-16 00:01:32.351248 | orchestrator | 00:01:32.351 STDOUT terraform:  + image_name = (known after apply) 2025-09-16 00:01:32.351254 | orchestrator | 00:01:32.351 STDOUT terraform:  + key_pair = "testbed" 2025-09-16 00:01:32.351296 | orchestrator | 00:01:32.351 STDOUT terraform:  + name = "testbed-node-4" 2025-09-16 00:01:32.351333 | orchestrator | 00:01:32.351 STDOUT terraform:  + power_state = "active" 2025-09-16 00:01:32.351338 | orchestrator | 00:01:32.351 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.351385 | orchestrator | 00:01:32.351 STDOUT terraform:  + security_groups = (known after apply) 2025-09-16 00:01:32.351390 | orchestrator | 00:01:32.351 STDOUT terraform:  + stop_before_destroy = false 2025-09-16 00:01:32.351443 | orchestrator | 00:01:32.351 STDOUT terraform:  + updated = (known after apply) 2025-09-16 00:01:32.351449 | orchestrator | 00:01:32.351 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-16 00:01:32.351522 | orchestrator | 00:01:32.351 STDOUT terraform:  + block_device { 2025-09-16 00:01:32.351532 | orchestrator | 00:01:32.351 STDOUT terraform:  + boot_index = 0 2025-09-16 00:01:32.351536 | orchestrator | 00:01:32.351 STDOUT terraform:  + delete_on_termination = false 2025-09-16 00:01:32.351540 | orchestrator | 00:01:32.351 STDOUT terraform:  + destination_type = "volume" 2025-09-16 00:01:32.351576 | orchestrator | 00:01:32.351 STDOUT terraform:  + multiattach = false 2025-09-16 00:01:32.351582 | orchestrator | 00:01:32.351 STDOUT terraform:  + source_type = "volume" 2025-09-16 00:01:32.351621 | orchestrator | 00:01:32.351 STDOUT terraform:  + uuid = (known after apply) 2025-09-16 00:01:32.351743 | orchestrator | 00:01:32.351 STDOUT terraform:  } 2025-09-16 00:01:32.351797 | orchestrator | 00:01:32.351 STDOUT terraform:  + network { 2025-09-16 00:01:32.351922 | orchestrator | 00:01:32.351 STDOUT terraform:  + access_network = false 2025-09-16 00:01:32.351983 | orchestrator | 00:01:32.351 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-16 00:01:32.351988 | orchestrator | 00:01:32.351 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-16 00:01:32.351992 | orchestrator | 00:01:32.351 STDOUT terraform:  + mac = (known after apply) 2025-09-16 00:01:32.352001 | orchestrator | 00:01:32.351 STDOUT terraform:  + name = (known after apply) 2025-09-16 00:01:32.352006 | orchestrator | 00:01:32.351 STDOUT terraform:  + port = (known after apply) 2025-09-16 00:01:32.352009 | orchestrator | 00:01:32.351 STDOUT terraform:  + uuid = (known after apply) 2025-09-16 00:01:32.352013 | 
orchestrator | 00:01:32.351 STDOUT terraform:  } 2025-09-16 00:01:32.352017 | orchestrator | 00:01:32.351 STDOUT terraform:  } 2025-09-16 00:01:32.352021 | orchestrator | 00:01:32.351 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-09-16 00:01:32.352025 | orchestrator | 00:01:32.351 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-16 00:01:32.352029 | orchestrator | 00:01:32.351 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-16 00:01:32.352032 | orchestrator | 00:01:32.351 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-16 00:01:32.352036 | orchestrator | 00:01:32.351 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-16 00:01:32.352041 | orchestrator | 00:01:32.351 STDOUT terraform:  + all_tags = (known after apply) 2025-09-16 00:01:32.352045 | orchestrator | 00:01:32.352 STDOUT terraform:  + availability_zone = "nova" 2025-09-16 00:01:32.352049 | orchestrator | 00:01:32.352 STDOUT terraform:  + config_drive = true 2025-09-16 00:01:32.352120 | orchestrator | 00:01:32.352 STDOUT terraform:  + created = (known after apply) 2025-09-16 00:01:32.352218 | orchestrator | 00:01:32.352 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-16 00:01:32.352269 | orchestrator | 00:01:32.352 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-16 00:01:32.352347 | orchestrator | 00:01:32.352 STDOUT terraform:  + force_delete = false 2025-09-16 00:01:32.352352 | orchestrator | 00:01:32.352 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-16 00:01:32.352357 | orchestrator | 00:01:32.352 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.352361 | orchestrator | 00:01:32.352 STDOUT terraform:  + image_id = (known after apply) 2025-09-16 00:01:32.352365 | orchestrator | 00:01:32.352 STDOUT terraform:  + image_name = (known after apply) 2025-09-16 00:01:32.352369 | orchestrator | 00:01:32.352 STDOUT terraform:  + key_pair = "testbed" 2025-09-16 00:01:32.352372 | orchestrator | 00:01:32.352 STDOUT terraform:  + name = "testbed-node-5" 2025-09-16 00:01:32.352376 | orchestrator | 00:01:32.352 STDOUT terraform:  + power_state = "active" 2025-09-16 00:01:32.352380 | orchestrator | 00:01:32.352 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.352385 | orchestrator | 00:01:32.352 STDOUT terraform:  + security_groups = (known after apply) 2025-09-16 00:01:32.352430 | orchestrator | 00:01:32.352 STDOUT terraform:  + stop_before_destroy = false 2025-09-16 00:01:32.352513 | orchestrator | 00:01:32.352 STDOUT terraform:  + updated = (known after apply) 2025-09-16 00:01:32.352593 | orchestrator | 00:01:32.352 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-16 00:01:32.352597 | orchestrator | 00:01:32.352 STDOUT terraform:  + block_device { 2025-09-16 00:01:32.352605 | orchestrator | 00:01:32.352 STDOUT terraform:  + boot_index = 0 2025-09-16 00:01:32.352610 | orchestrator | 00:01:32.352 STDOUT terraform:  + delete_on_termination = false 2025-09-16 00:01:32.352613 | orchestrator | 00:01:32.352 STDOUT terraform:  + destination_type = "volume" 2025-09-16 00:01:32.352617 | orchestrator | 00:01:32.352 STDOUT terraform:  + multiattach = false 2025-09-16 00:01:32.352621 | orchestrator | 00:01:32.352 STDOUT terraform:  + source_type = "volume" 2025-09-16 00:01:32.352653 | orchestrator | 00:01:32.352 STDOUT terraform:  + uuid = (known after apply) 2025-09-16 00:01:32.352661 | orchestrator | 00:01:32.352 
STDOUT terraform:  } 2025-09-16 00:01:32.352665 | orchestrator | 00:01:32.352 STDOUT terraform:  + network { 2025-09-16 00:01:32.352670 | orchestrator | 00:01:32.352 STDOUT terraform:  + access_network = false 2025-09-16 00:01:32.352714 | orchestrator | 00:01:32.352 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-16 00:01:32.352735 | orchestrator | 00:01:32.352 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-16 00:01:32.352765 | orchestrator | 00:01:32.352 STDOUT terraform:  + mac = (known after apply) 2025-09-16 00:01:32.352818 | orchestrator | 00:01:32.352 STDOUT terraform:  + name = (known after apply) 2025-09-16 00:01:32.352908 | orchestrator | 00:01:32.352 STDOUT terraform:  + port = (known after apply) 2025-09-16 00:01:32.352917 | orchestrator | 00:01:32.352 STDOUT terraform:  + uuid = (known after apply) 2025-09-16 00:01:32.352920 | orchestrator | 00:01:32.352 STDOUT terraform:  } 2025-09-16 00:01:32.352924 | orchestrator | 00:01:32.352 STDOUT terraform:  } 2025-09-16 00:01:32.352930 | orchestrator | 00:01:32.352 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-09-16 00:01:32.352934 | orchestrator | 00:01:32.352 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-09-16 00:01:32.353029 | orchestrator | 00:01:32.352 STDOUT terraform:  + fingerprint = (known after apply) 2025-09-16 00:01:32.353037 | orchestrator | 00:01:32.352 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.353042 | orchestrator | 00:01:32.352 STDOUT terraform:  + name = "testbed" 2025-09-16 00:01:32.353046 | orchestrator | 00:01:32.352 STDOUT terraform:  + private_key = (sensitive value) 2025-09-16 00:01:32.353050 | orchestrator | 00:01:32.353 STDOUT terraform:  + public_key = (known after apply) 2025-09-16 00:01:32.353055 | orchestrator | 00:01:32.353 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.353096 | orchestrator | 00:01:32.353 STDOUT terraform:  + user_id = (known after apply) 2025-09-16 00:01:32.353169 | orchestrator | 00:01:32.353 STDOUT terraform:  } 2025-09-16 00:01:32.353179 | orchestrator | 00:01:32.353 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-09-16 00:01:32.353183 | orchestrator | 00:01:32.353 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-16 00:01:32.353231 | orchestrator | 00:01:32.353 STDOUT terraform:  + device = (known after apply) 2025-09-16 00:01:32.353241 | orchestrator | 00:01:32.353 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.353248 | orchestrator | 00:01:32.353 STDOUT terraform:  + instance_id = (known after apply) 2025-09-16 00:01:32.353316 | orchestrator | 00:01:32.353 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.353326 | orchestrator | 00:01:32.353 STDOUT terraform:  + volume_id = (known after apply) 2025-09-16 00:01:32.353330 | orchestrator | 00:01:32.353 STDOUT terraform:  } 2025-09-16 00:01:32.353373 | orchestrator | 00:01:32.353 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-09-16 00:01:32.353439 | orchestrator | 00:01:32.353 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-16 00:01:32.353444 | orchestrator | 00:01:32.353 STDOUT terraform:  + device = (known after apply) 2025-09-16 00:01:32.353448 | orchestrator | 00:01:32.353 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.353453 | 
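The six node_server entries above are identical except for the instance name (testbed-node-0 through testbed-node-5) and the boot volume UUID, so they almost certainly come from a single counted resource; the "testbed" keypair has no public_key supplied, which is why the plan lists private_key as a sensitive value. A minimal sketch of what that configuration likely looks like; the boot-volume resource name, the user_data source and the port wiring are assumptions, the literal values are taken from the plan:

resource "openstack_compute_keypair_v2" "key" {
  # no public_key set, so the provider generates the pair in-cloud
  name = "testbed"
}

resource "openstack_compute_instance_v2" "node_server" {
  count             = 6
  name              = "testbed-node-${count.index}"
  availability_zone = "nova"
  flavor_name       = "OSISM-8V-32"
  key_pair          = openstack_compute_keypair_v2.key.name
  config_drive      = true
  power_state       = "active"
  # the plan only shows the SHA1 of the user data; assumed to be
  # rendered from a cloud-init file elsewhere in the module
  user_data         = file("${path.module}/node_user_data.yml")

  block_device {
    # "node_volume" is an assumed name; the boot volumes are not
    # part of this plan excerpt
    uuid                  = openstack_blockstorage_volume_v3.node_volume[count.index].id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}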
orchestrator | 00:01:32.353 STDOUT terraform:  + instance_id = (known after apply) 2025-09-16 00:01:32.353518 | orchestrator | 00:01:32.353 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.353527 | orchestrator | 00:01:32.353 STDOUT terraform:  + volume_id = (known after apply) 2025-09-16 00:01:32.353531 | orchestrator | 00:01:32.353 STDOUT terraform:  } 2025-09-16 00:01:32.353570 | orchestrator | 00:01:32.353 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-09-16 00:01:32.353660 | orchestrator | 00:01:32.353 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-16 00:01:32.353757 | orchestrator | 00:01:32.353 STDOUT terraform:  + device = (known after apply) 2025-09-16 00:01:32.353762 | orchestrator | 00:01:32.353 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.353766 | orchestrator | 00:01:32.353 STDOUT terraform:  + instance_id = (known after apply) 2025-09-16 00:01:32.353770 | orchestrator | 00:01:32.353 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.353774 | orchestrator | 00:01:32.353 STDOUT terraform:  + volume_id = (known after apply) 2025-09-16 00:01:32.353778 | orchestrator | 00:01:32.353 STDOUT terraform:  } 2025-09-16 00:01:32.353848 | orchestrator | 00:01:32.353 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-09-16 00:01:32.353858 | orchestrator | 00:01:32.353 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-16 00:01:32.353862 | orchestrator | 00:01:32.353 STDOUT terraform:  + device = (known after apply) 2025-09-16 00:01:32.353891 | orchestrator | 00:01:32.353 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.353914 | orchestrator | 00:01:32.353 STDOUT terraform:  + instance_id = (known after apply) 2025-09-16 00:01:32.353965 | orchestrator | 00:01:32.353 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.353970 | orchestrator | 00:01:32.353 STDOUT terraform:  + volume_id = (known after apply) 2025-09-16 00:01:32.353974 | orchestrator | 00:01:32.353 STDOUT terraform:  } 2025-09-16 00:01:32.354130 | orchestrator | 00:01:32.353 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-09-16 00:01:32.354143 | orchestrator | 00:01:32.354 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-16 00:01:32.354147 | orchestrator | 00:01:32.354 STDOUT terraform:  + device = (known after apply) 2025-09-16 00:01:32.354151 | orchestrator | 00:01:32.354 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.354157 | orchestrator | 00:01:32.354 STDOUT terraform:  + instance_id = (known after apply) 2025-09-16 00:01:32.354177 | orchestrator | 00:01:32.354 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.354215 | orchestrator | 00:01:32.354 STDOUT terraform:  + volume_id = (known after apply) 2025-09-16 00:01:32.354220 | orchestrator | 00:01:32.354 STDOUT terraform:  } 2025-09-16 00:01:32.354275 | orchestrator | 00:01:32.354 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-09-16 00:01:32.354354 | orchestrator | 00:01:32.354 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-16 00:01:32.354469 | orchestrator | 00:01:32.354 STDOUT terraform:  + device = (known after 
apply) 2025-09-16 00:01:32.354477 | orchestrator | 00:01:32.354 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.354481 | orchestrator | 00:01:32.354 STDOUT terraform:  + instance_id = (known after apply) 2025-09-16 00:01:32.354485 | orchestrator | 00:01:32.354 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.354489 | orchestrator | 00:01:32.354 STDOUT terraform:  + volume_id = (known after apply) 2025-09-16 00:01:32.354492 | orchestrator | 00:01:32.354 STDOUT terraform:  } 2025-09-16 00:01:32.354498 | orchestrator | 00:01:32.354 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-09-16 00:01:32.354533 | orchestrator | 00:01:32.354 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-16 00:01:32.354597 | orchestrator | 00:01:32.354 STDOUT terraform:  + device = (known after apply) 2025-09-16 00:01:32.354601 | orchestrator | 00:01:32.354 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.354605 | orchestrator | 00:01:32.354 STDOUT terraform:  + instance_id = (known after apply) 2025-09-16 00:01:32.354610 | orchestrator | 00:01:32.354 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.354653 | orchestrator | 00:01:32.354 STDOUT terraform:  + volume_id = (known after apply) 2025-09-16 00:01:32.354659 | orchestrator | 00:01:32.354 STDOUT terraform:  } 2025-09-16 00:01:32.354786 | orchestrator | 00:01:32.354 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-09-16 00:01:32.354836 | orchestrator | 00:01:32.354 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-16 00:01:32.354946 | orchestrator | 00:01:32.354 STDOUT terraform:  + device = (known after apply) 2025-09-16 00:01:32.355059 | orchestrator | 00:01:32.354 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.355067 | orchestrator | 00:01:32.354 STDOUT terraform:  + instance_id = (known after apply) 2025-09-16 00:01:32.355071 | orchestrator | 00:01:32.354 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.355079 | orchestrator | 00:01:32.354 STDOUT terraform:  + volume_id = (known after apply) 2025-09-16 00:01:32.355083 | orchestrator | 00:01:32.354 STDOUT terraform:  } 2025-09-16 00:01:32.355087 | orchestrator | 00:01:32.354 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-09-16 00:01:32.355093 | orchestrator | 00:01:32.354 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-16 00:01:32.355104 | orchestrator | 00:01:32.354 STDOUT terraform:  + device = (known after apply) 2025-09-16 00:01:32.355108 | orchestrator | 00:01:32.354 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.355112 | orchestrator | 00:01:32.354 STDOUT terraform:  + instance_id = (known after apply) 2025-09-16 00:01:32.355116 | orchestrator | 00:01:32.355 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.355119 | orchestrator | 00:01:32.355 STDOUT terraform:  + volume_id = (known after apply) 2025-09-16 00:01:32.355123 | orchestrator | 00:01:32.355 STDOUT terraform:  } 2025-09-16 00:01:32.355131 | orchestrator | 00:01:32.355 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-09-16 00:01:32.355202 | orchestrator | 00:01:32.355 STDOUT terraform:  + resource 
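The nine node_volume_attachment entries differ only in their index, which again points at a counted resource. How the nine extra volumes map onto the six instances is not visible in this excerpt, so the index arithmetic and the volume resource name below are placeholders:

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  # placeholder mapping; the real attachment-to-instance assignment
  # cannot be read off this plan excerpt
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.node_extra_volume[count.index].id
}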
"openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-09-16 00:01:32.355289 | orchestrator | 00:01:32.355 STDOUT terraform:  + fixed_ip = (known after apply) 2025-09-16 00:01:32.355294 | orchestrator | 00:01:32.355 STDOUT terraform:  + floating_ip = (known after apply) 2025-09-16 00:01:32.355298 | orchestrator | 00:01:32.355 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.355351 | orchestrator | 00:01:32.355 STDOUT terraform:  + port_id = (known after apply) 2025-09-16 00:01:32.355444 | orchestrator | 00:01:32.355 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.355477 | orchestrator | 00:01:32.355 STDOUT terraform:  } 2025-09-16 00:01:32.355537 | orchestrator | 00:01:32.355 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-09-16 00:01:32.355545 | orchestrator | 00:01:32.355 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-09-16 00:01:32.355550 | orchestrator | 00:01:32.355 STDOUT terraform:  + address = (known after apply) 2025-09-16 00:01:32.355555 | orchestrator | 00:01:32.355 STDOUT terraform:  + all_tags = (known after apply) 2025-09-16 00:01:32.355559 | orchestrator | 00:01:32.355 STDOUT terraform:  + dns_domain = (known after apply) 2025-09-16 00:01:32.355563 | orchestrator | 00:01:32.355 STDOUT terraform:  + dns_name = (known after apply) 2025-09-16 00:01:32.355567 | orchestrator | 00:01:32.355 STDOUT terraform:  + fixed_ip = (known after apply) 2025-09-16 00:01:32.355570 | orchestrator | 00:01:32.355 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.355574 | orchestrator | 00:01:32.355 STDOUT terraform:  + pool = "public" 2025-09-16 00:01:32.355579 | orchestrator | 00:01:32.355 STDOUT terraform:  + port_id = (known after apply) 2025-09-16 00:01:32.355582 | orchestrator | 00:01:32.355 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.355588 | orchestrator | 00:01:32.355 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-16 00:01:32.355595 | orchestrator | 00:01:32.355 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-16 00:01:32.355600 | orchestrator | 00:01:32.355 STDOUT terraform:  } 2025-09-16 00:01:32.355645 | orchestrator | 00:01:32.355 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-09-16 00:01:32.355736 | orchestrator | 00:01:32.355 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-09-16 00:01:32.355742 | orchestrator | 00:01:32.355 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-16 00:01:32.355747 | orchestrator | 00:01:32.355 STDOUT terraform:  + all_tags = (known after apply) 2025-09-16 00:01:32.355803 | orchestrator | 00:01:32.355 STDOUT terraform:  + availability_zone_hints = [ 2025-09-16 00:01:32.355808 | orchestrator | 00:01:32.355 STDOUT terraform:  + "nova", 2025-09-16 00:01:32.355812 | orchestrator | 00:01:32.355 STDOUT terraform:  ] 2025-09-16 00:01:32.355818 | orchestrator | 00:01:32.355 STDOUT terraform:  + dns_domain = (known after apply) 2025-09-16 00:01:32.355856 | orchestrator | 00:01:32.355 STDOUT terraform:  + external = (known after apply) 2025-09-16 00:01:32.355889 | orchestrator | 00:01:32.355 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.355926 | orchestrator | 00:01:32.355 STDOUT terraform:  + mtu = (known after apply) 2025-09-16 00:01:32.355986 | orchestrator | 00:01:32.355 STDOUT terraform:  + name = 
"net-testbed-management" 2025-09-16 00:01:32.355993 | orchestrator | 00:01:32.355 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-16 00:01:32.356064 | orchestrator | 00:01:32.355 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-16 00:01:32.356070 | orchestrator | 00:01:32.356 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.356098 | orchestrator | 00:01:32.356 STDOUT terraform:  + shared = (known after apply) 2025-09-16 00:01:32.356167 | orchestrator | 00:01:32.356 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-16 00:01:32.356176 | orchestrator | 00:01:32.356 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-09-16 00:01:32.356182 | orchestrator | 00:01:32.356 STDOUT terraform:  + segments (known after apply) 2025-09-16 00:01:32.356186 | orchestrator | 00:01:32.356 STDOUT terraform:  } 2025-09-16 00:01:32.356260 | orchestrator | 00:01:32.356 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-09-16 00:01:32.356272 | orchestrator | 00:01:32.356 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-09-16 00:01:32.356302 | orchestrator | 00:01:32.356 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-16 00:01:32.356417 | orchestrator | 00:01:32.356 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-16 00:01:32.356423 | orchestrator | 00:01:32.356 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-16 00:01:32.356427 | orchestrator | 00:01:32.356 STDOUT terraform:  + all_tags = (known after apply) 2025-09-16 00:01:32.356433 | orchestrator | 00:01:32.356 STDOUT terraform:  + device_id = (known after apply) 2025-09-16 00:01:32.356459 | orchestrator | 00:01:32.356 STDOUT terraform:  + device_owner = (known after apply) 2025-09-16 00:01:32.356526 | orchestrator | 00:01:32.356 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-16 00:01:32.356535 | orchestrator | 00:01:32.356 STDOUT terraform:  + dns_name = (known after apply) 2025-09-16 00:01:32.356588 | orchestrator | 00:01:32.356 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.356595 | orchestrator | 00:01:32.356 STDOUT terraform:  + mac_address = (known after apply) 2025-09-16 00:01:32.356700 | orchestrator | 00:01:32.356 STDOUT terraform:  + network_id = (known after apply) 2025-09-16 00:01:32.356714 | orchestrator | 00:01:32.356 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-16 00:01:32.356718 | orchestrator | 00:01:32.356 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-16 00:01:32.356724 | orchestrator | 00:01:32.356 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.356801 | orchestrator | 00:01:32.356 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-16 00:01:32.356810 | orchestrator | 00:01:32.356 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-16 00:01:32.356816 | orchestrator | 00:01:32.356 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.356943 | orchestrator | 00:01:32.356 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-16 00:01:32.356999 | orchestrator | 00:01:32.356 STDOUT terraform:  } 2025-09-16 00:01:32.357069 | orchestrator | 00:01:32.356 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.357134 | orchestrator | 00:01:32.356 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-16 00:01:32.357139 | orchestrator | 00:01:32.356 STDOUT 
terraform:  } 2025-09-16 00:01:32.357143 | orchestrator | 00:01:32.356 STDOUT terraform:  + binding (known after apply) 2025-09-16 00:01:32.357147 | orchestrator | 00:01:32.356 STDOUT terraform:  + fixed_ip { 2025-09-16 00:01:32.357151 | orchestrator | 00:01:32.356 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-09-16 00:01:32.357154 | orchestrator | 00:01:32.356 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-16 00:01:32.357158 | orchestrator | 00:01:32.356 STDOUT terraform:  } 2025-09-16 00:01:32.357162 | orchestrator | 00:01:32.356 STDOUT terraform:  } 2025-09-16 00:01:32.357166 | orchestrator | 00:01:32.356 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-09-16 00:01:32.357170 | orchestrator | 00:01:32.357 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-16 00:01:32.357174 | orchestrator | 00:01:32.357 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-16 00:01:32.357178 | orchestrator | 00:01:32.357 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-16 00:01:32.357183 | orchestrator | 00:01:32.357 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-16 00:01:32.357187 | orchestrator | 00:01:32.357 STDOUT terraform:  + all_tags = (known after apply) 2025-09-16 00:01:32.357209 | orchestrator | 00:01:32.357 STDOUT terraform:  + device_id = (known after apply) 2025-09-16 00:01:32.357337 | orchestrator | 00:01:32.357 STDOUT terraform:  + device_owner = (known after apply) 2025-09-16 00:01:32.357345 | orchestrator | 00:01:32.357 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-16 00:01:32.357349 | orchestrator | 00:01:32.357 STDOUT terraform:  + dns_name = (known after apply) 2025-09-16 00:01:32.357355 | orchestrator | 00:01:32.357 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.357362 | orchestrator | 00:01:32.357 STDOUT terraform:  + mac_address = (known after apply) 2025-09-16 00:01:32.357402 | orchestrator | 00:01:32.357 STDOUT terraform:  + network_id = (known after apply) 2025-09-16 00:01:32.357489 | orchestrator | 00:01:32.357 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-16 00:01:32.357533 | orchestrator | 00:01:32.357 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-16 00:01:32.357539 | orchestrator | 00:01:32.357 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.357543 | orchestrator | 00:01:32.357 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-16 00:01:32.357547 | orchestrator | 00:01:32.357 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-16 00:01:32.357552 | orchestrator | 00:01:32.357 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.357610 | orchestrator | 00:01:32.357 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-16 00:01:32.357655 | orchestrator | 00:01:32.357 STDOUT terraform:  } 2025-09-16 00:01:32.357659 | orchestrator | 00:01:32.357 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.357665 | orchestrator | 00:01:32.357 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-16 00:01:32.357669 | orchestrator | 00:01:32.357 STDOUT terraform:  } 2025-09-16 00:01:32.357672 | orchestrator | 00:01:32.357 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.357678 | orchestrator | 00:01:32.357 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-16 00:01:32.357682 | orchestrator | 00:01:32.357 STDOUT terraform:  } 
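External reachability and the management network are both simple: a floating IP is allocated from the "public" pool and bound to the manager's management port once that port exists, and the network itself is a named Neutron network pinned to the "nova" availability zone. The resource names and values come straight from the plan; the subnet does not appear in this part of the output and is only referenced later:

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public"
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}

resource "openstack_networking_network_v2" "net_management" {
  name                    = "net-testbed-management"
  availability_zone_hints = ["nova"]
}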
2025-09-16 00:01:32.357777 | orchestrator | 00:01:32.357 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.357783 | orchestrator | 00:01:32.357 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-16 00:01:32.357787 | orchestrator | 00:01:32.357 STDOUT terraform:  } 2025-09-16 00:01:32.357791 | orchestrator | 00:01:32.357 STDOUT terraform:  + binding (known after apply) 2025-09-16 00:01:32.357797 | orchestrator | 00:01:32.357 STDOUT terraform:  + fixed_ip { 2025-09-16 00:01:32.357801 | orchestrator | 00:01:32.357 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-09-16 00:01:32.357846 | orchestrator | 00:01:32.357 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-16 00:01:32.357910 | orchestrator | 00:01:32.357 STDOUT terraform:  } 2025-09-16 00:01:32.357914 | orchestrator | 00:01:32.357 STDOUT terraform:  } 2025-09-16 00:01:32.357920 | orchestrator | 00:01:32.357 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-09-16 00:01:32.357926 | orchestrator | 00:01:32.357 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-16 00:01:32.357979 | orchestrator | 00:01:32.357 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-16 00:01:32.357985 | orchestrator | 00:01:32.357 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-16 00:01:32.358081 | orchestrator | 00:01:32.357 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-16 00:01:32.358181 | orchestrator | 00:01:32.358 STDOUT terraform:  + all_tags = (known after apply) 2025-09-16 00:01:32.358188 | orchestrator | 00:01:32.358 STDOUT terraform:  + device_id = (known after apply) 2025-09-16 00:01:32.358192 | orchestrator | 00:01:32.358 STDOUT terraform:  + device_owner = (known after apply) 2025-09-16 00:01:32.358196 | orchestrator | 00:01:32.358 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-16 00:01:32.358200 | orchestrator | 00:01:32.358 STDOUT terraform:  + dns_name = (known after apply) 2025-09-16 00:01:32.358896 | orchestrator | 00:01:32.358 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.358903 | orchestrator | 00:01:32.358 STDOUT terraform:  + mac_address = (known after apply) 2025-09-16 00:01:32.358907 | orchestrator | 00:01:32.358 STDOUT terraform:  + network_id = (known after apply) 2025-09-16 00:01:32.358911 | orchestrator | 00:01:32.358 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-16 00:01:32.358915 | orchestrator | 00:01:32.358 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-16 00:01:32.358919 | orchestrator | 00:01:32.358 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.358922 | orchestrator | 00:01:32.358 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-16 00:01:32.358926 | orchestrator | 00:01:32.358 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-16 00:01:32.358930 | orchestrator | 00:01:32.358 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.358934 | orchestrator | 00:01:32.358 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-16 00:01:32.358938 | orchestrator | 00:01:32.358 STDOUT terraform:  } 2025-09-16 00:01:32.358942 | orchestrator | 00:01:32.358 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.358946 | orchestrator | 00:01:32.358 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-16 00:01:32.358950 | orchestrator | 00:01:32.358 STDOUT terraform:  } 2025-09-16 
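The manager port above pins the fixed address 192.168.16.5 and whitelists 192.168.112.0/20 and 192.168.16.8/20 as allowed address pairs (the plan prints the pairs with their /20 masks). A sketch, with the subnet resource name assumed since the subnet is not shown in this excerpt:

resource "openstack_networking_port_v2" "manager_port_management" {
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    # subnet name is an assumption
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.5"
  }

  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }

  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
}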
00:01:32.358953 | orchestrator | 00:01:32.358 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.358957 | orchestrator | 00:01:32.358 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-16 00:01:32.358961 | orchestrator | 00:01:32.358 STDOUT terraform:  } 2025-09-16 00:01:32.358965 | orchestrator | 00:01:32.358 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.358969 | orchestrator | 00:01:32.358 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-16 00:01:32.358973 | orchestrator | 00:01:32.358 STDOUT terraform:  } 2025-09-16 00:01:32.358976 | orchestrator | 00:01:32.358 STDOUT terraform:  + binding (known after apply) 2025-09-16 00:01:32.358980 | orchestrator | 00:01:32.358 STDOUT terraform:  + fixed_ip { 2025-09-16 00:01:32.358989 | orchestrator | 00:01:32.358 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-09-16 00:01:32.358993 | orchestrator | 00:01:32.358 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-16 00:01:32.358997 | orchestrator | 00:01:32.358 STDOUT terraform:  } 2025-09-16 00:01:32.359001 | orchestrator | 00:01:32.358 STDOUT terraform:  } 2025-09-16 00:01:32.359299 | orchestrator | 00:01:32.358 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-09-16 00:01:32.362049 | orchestrator | 00:01:32.359 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-16 00:01:32.362057 | orchestrator | 00:01:32.359 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-16 00:01:32.362061 | orchestrator | 00:01:32.359 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-16 00:01:32.362065 | orchestrator | 00:01:32.359 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-16 00:01:32.362068 | orchestrator | 00:01:32.359 STDOUT terraform:  + all_tags = (known after apply) 2025-09-16 00:01:32.362072 | orchestrator | 00:01:32.359 STDOUT terraform:  + device_id = (known after apply) 2025-09-16 00:01:32.362076 | orchestrator | 00:01:32.359 STDOUT terraform:  + device_owner = (known after apply) 2025-09-16 00:01:32.362080 | orchestrator | 00:01:32.359 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-16 00:01:32.362083 | orchestrator | 00:01:32.359 STDOUT terraform:  + dns_name = (known after apply) 2025-09-16 00:01:32.362087 | orchestrator | 00:01:32.359 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.362091 | orchestrator | 00:01:32.359 STDOUT terraform:  + mac_address = (known after apply) 2025-09-16 00:01:32.362095 | orchestrator | 00:01:32.359 STDOUT terraform:  + network_id = (known after apply) 2025-09-16 00:01:32.362099 | orchestrator | 00:01:32.359 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-16 00:01:32.362102 | orchestrator | 00:01:32.359 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-16 00:01:32.362106 | orchestrator | 00:01:32.359 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.362110 | orchestrator | 00:01:32.359 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-16 00:01:32.362114 | orchestrator | 00:01:32.359 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-16 00:01:32.362117 | orchestrator | 00:01:32.359 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.362121 | orchestrator | 00:01:32.359 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-16 00:01:32.362125 | orchestrator | 00:01:32.359 STDOUT terraform:  } 2025-09-16 00:01:32.362129 | 
orchestrator | 00:01:32.359 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.362133 | orchestrator | 00:01:32.359 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-16 00:01:32.362136 | orchestrator | 00:01:32.359 STDOUT terraform:  } 2025-09-16 00:01:32.362140 | orchestrator | 00:01:32.359 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.362144 | orchestrator | 00:01:32.359 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-16 00:01:32.362152 | orchestrator | 00:01:32.359 STDOUT terraform:  } 2025-09-16 00:01:32.362156 | orchestrator | 00:01:32.359 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.362160 | orchestrator | 00:01:32.360 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-16 00:01:32.362164 | orchestrator | 00:01:32.360 STDOUT terraform:  } 2025-09-16 00:01:32.362167 | orchestrator | 00:01:32.360 STDOUT terraform:  + binding (known after apply) 2025-09-16 00:01:32.362171 | orchestrator | 00:01:32.360 STDOUT terraform:  + fixed_ip { 2025-09-16 00:01:32.362175 | orchestrator | 00:01:32.360 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-09-16 00:01:32.362179 | orchestrator | 00:01:32.360 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-16 00:01:32.362183 | orchestrator | 00:01:32.360 STDOUT terraform:  } 2025-09-16 00:01:32.362186 | orchestrator | 00:01:32.360 STDOUT terraform:  } 2025-09-16 00:01:32.362190 | orchestrator | 00:01:32.360 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-09-16 00:01:32.362194 | orchestrator | 00:01:32.360 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-16 00:01:32.362198 | orchestrator | 00:01:32.360 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-16 00:01:32.362206 | orchestrator | 00:01:32.360 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-16 00:01:32.362210 | orchestrator | 00:01:32.360 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-16 00:01:32.362214 | orchestrator | 00:01:32.360 STDOUT terraform:  + all_tags = (known after apply) 2025-09-16 00:01:32.362218 | orchestrator | 00:01:32.360 STDOUT terraform:  + device_id = (known after apply) 2025-09-16 00:01:32.362221 | orchestrator | 00:01:32.360 STDOUT terraform:  + device_owner = (known after apply) 2025-09-16 00:01:32.362229 | orchestrator | 00:01:32.360 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-16 00:01:32.362233 | orchestrator | 00:01:32.360 STDOUT terraform:  + dns_name = (known after apply) 2025-09-16 00:01:32.362239 | orchestrator | 00:01:32.360 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.362243 | orchestrator | 00:01:32.360 STDOUT terraform:  + mac_address = (known after apply) 2025-09-16 00:01:32.362247 | orchestrator | 00:01:32.360 STDOUT terraform:  + network_id = (known after apply) 2025-09-16 00:01:32.362250 | orchestrator | 00:01:32.360 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-16 00:01:32.362254 | orchestrator | 00:01:32.360 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-16 00:01:32.362258 | orchestrator | 00:01:32.360 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.362261 | orchestrator | 00:01:32.360 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-16 00:01:32.362265 | orchestrator | 00:01:32.360 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-16 00:01:32.362269 | orchestrator | 
00:01:32.360 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.362273 | orchestrator | 00:01:32.360 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-16 00:01:32.362280 | orchestrator | 00:01:32.360 STDOUT terraform:  } 2025-09-16 00:01:32.362284 | orchestrator | 00:01:32.360 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.362288 | orchestrator | 00:01:32.360 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-16 00:01:32.362292 | orchestrator | 00:01:32.360 STDOUT terraform:  } 2025-09-16 00:01:32.362296 | orchestrator | 00:01:32.360 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.362299 | orchestrator | 00:01:32.360 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-16 00:01:32.362303 | orchestrator | 00:01:32.360 STDOUT terraform:  } 2025-09-16 00:01:32.362307 | orchestrator | 00:01:32.360 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.362311 | orchestrator | 00:01:32.360 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-16 00:01:32.362315 | orchestrator | 00:01:32.360 STDOUT terraform:  } 2025-09-16 00:01:32.362318 | orchestrator | 00:01:32.360 STDOUT terraform:  + binding (known after apply) 2025-09-16 00:01:32.362322 | orchestrator | 00:01:32.360 STDOUT terraform:  + fixed_ip { 2025-09-16 00:01:32.362326 | orchestrator | 00:01:32.360 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-09-16 00:01:32.362330 | orchestrator | 00:01:32.360 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-16 00:01:32.362334 | orchestrator | 00:01:32.360 STDOUT terraform:  } 2025-09-16 00:01:32.362338 | orchestrator | 00:01:32.360 STDOUT terraform:  } 2025-09-16 00:01:32.362341 | orchestrator | 00:01:32.360 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-09-16 00:01:32.362345 | orchestrator | 00:01:32.361 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-16 00:01:32.362349 | orchestrator | 00:01:32.361 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-16 00:01:32.362353 | orchestrator | 00:01:32.361 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-16 00:01:32.362357 | orchestrator | 00:01:32.361 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-16 00:01:32.362360 | orchestrator | 00:01:32.361 STDOUT terraform:  + all_tags = (known after apply) 2025-09-16 00:01:32.362368 | orchestrator | 00:01:32.361 STDOUT terraform:  + device_id = (known after apply) 2025-09-16 00:01:32.362372 | orchestrator | 00:01:32.361 STDOUT terraform:  + device_owner = (known after apply) 2025-09-16 00:01:32.362376 | orchestrator | 00:01:32.361 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-16 00:01:32.362380 | orchestrator | 00:01:32.361 STDOUT terraform:  + dns_name = (known after apply) 2025-09-16 00:01:32.362384 | orchestrator | 00:01:32.361 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.362387 | orchestrator | 00:01:32.361 STDOUT terraform:  + mac_address = (known after apply) 2025-09-16 00:01:32.362391 | orchestrator | 00:01:32.361 STDOUT terraform:  + network_id = (known after apply) 2025-09-16 00:01:32.362397 | orchestrator | 00:01:32.361 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-16 00:01:32.362408 | orchestrator | 00:01:32.361 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-16 00:01:32.362412 | orchestrator | 00:01:32.361 STDOUT terraform:  + region = (known after apply) 
2025-09-16 00:01:32.362416 | orchestrator | 00:01:32.361 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-16 00:01:32.362420 | orchestrator | 00:01:32.361 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-16 00:01:32.362423 | orchestrator | 00:01:32.361 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.362427 | orchestrator | 00:01:32.361 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-16 00:01:32.362431 | orchestrator | 00:01:32.361 STDOUT terraform:  } 2025-09-16 00:01:32.362435 | orchestrator | 00:01:32.361 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.362439 | orchestrator | 00:01:32.361 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-16 00:01:32.362442 | orchestrator | 00:01:32.361 STDOUT terraform:  } 2025-09-16 00:01:32.362446 | orchestrator | 00:01:32.361 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.362450 | orchestrator | 00:01:32.361 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-16 00:01:32.362454 | orchestrator | 00:01:32.361 STDOUT terraform:  } 2025-09-16 00:01:32.362457 | orchestrator | 00:01:32.361 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.362461 | orchestrator | 00:01:32.361 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-16 00:01:32.362465 | orchestrator | 00:01:32.361 STDOUT terraform:  } 2025-09-16 00:01:32.362469 | orchestrator | 00:01:32.361 STDOUT terraform:  + binding (known after apply) 2025-09-16 00:01:32.362472 | orchestrator | 00:01:32.361 STDOUT terraform:  + fixed_ip { 2025-09-16 00:01:32.362476 | orchestrator | 00:01:32.361 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-09-16 00:01:32.362480 | orchestrator | 00:01:32.361 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-16 00:01:32.362484 | orchestrator | 00:01:32.361 STDOUT terraform:  } 2025-09-16 00:01:32.362488 | orchestrator | 00:01:32.361 STDOUT terraform:  } 2025-09-16 00:01:32.362491 | orchestrator | 00:01:32.361 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-09-16 00:01:32.362495 | orchestrator | 00:01:32.361 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-16 00:01:32.362499 | orchestrator | 00:01:32.361 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-16 00:01:32.362503 | orchestrator | 00:01:32.361 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-16 00:01:32.362507 | orchestrator | 00:01:32.361 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-16 00:01:32.362510 | orchestrator | 00:01:32.362 STDOUT terraform:  + all_tags = (known after apply) 2025-09-16 00:01:32.362514 | orchestrator | 00:01:32.362 STDOUT terraform:  + device_id = (known after apply) 2025-09-16 00:01:32.362518 | orchestrator | 00:01:32.362 STDOUT terraform:  + device_owner = (known after apply) 2025-09-16 00:01:32.362524 | orchestrator | 00:01:32.362 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-16 00:01:32.362532 | orchestrator | 00:01:32.362 STDOUT terraform:  + dns_name = (known after apply) 2025-09-16 00:01:32.362536 | orchestrator | 00:01:32.362 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.362540 | orchestrator | 00:01:32.362 STDOUT terraform:  + mac_address = (known after apply) 2025-09-16 00:01:32.362544 | orchestrator | 00:01:32.362 STDOUT terraform:  + network_id = (known after apply) 2025-09-16 00:01:32.362547 | orchestrator | 00:01:32.362 STDOUT terraform: 
 + port_security_enabled = (known after apply) 2025-09-16 00:01:32.362551 | orchestrator | 00:01:32.362 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-16 00:01:32.362555 | orchestrator | 00:01:32.362 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.362559 | orchestrator | 00:01:32.362 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-16 00:01:32.362563 | orchestrator | 00:01:32.362 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-16 00:01:32.362566 | orchestrator | 00:01:32.362 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.362570 | orchestrator | 00:01:32.362 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-16 00:01:32.362574 | orchestrator | 00:01:32.362 STDOUT terraform:  } 2025-09-16 00:01:32.362578 | orchestrator | 00:01:32.362 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.362581 | orchestrator | 00:01:32.362 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-16 00:01:32.362585 | orchestrator | 00:01:32.362 STDOUT terraform:  } 2025-09-16 00:01:32.362591 | orchestrator | 00:01:32.362 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.362595 | orchestrator | 00:01:32.362 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-16 00:01:32.362599 | orchestrator | 00:01:32.362 STDOUT terraform:  } 2025-09-16 00:01:32.362603 | orchestrator | 00:01:32.362 STDOUT terraform:  + allowed_address_pairs { 2025-09-16 00:01:32.362608 | orchestrator | 00:01:32.362 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-16 00:01:32.362614 | orchestrator | 00:01:32.362 STDOUT terraform:  } 2025-09-16 00:01:32.362653 | orchestrator | 00:01:32.362 STDOUT terraform:  + binding (known after apply) 2025-09-16 00:01:32.362658 | orchestrator | 00:01:32.362 STDOUT terraform:  + fixed_ip { 2025-09-16 00:01:32.362676 | orchestrator | 00:01:32.362 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-09-16 00:01:32.362745 | orchestrator | 00:01:32.362 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-16 00:01:32.362751 | orchestrator | 00:01:32.362 STDOUT terraform:  } 2025-09-16 00:01:32.362756 | orchestrator | 00:01:32.362 STDOUT terraform:  } 2025-09-16 00:01:32.362785 | orchestrator | 00:01:32.362 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-09-16 00:01:32.362837 | orchestrator | 00:01:32.362 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-09-16 00:01:32.362846 | orchestrator | 00:01:32.362 STDOUT terraform:  + force_destroy = false 2025-09-16 00:01:32.362908 | orchestrator | 00:01:32.362 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.362920 | orchestrator | 00:01:32.362 STDOUT terraform:  + port_id = (known after apply) 2025-09-16 00:01:32.362926 | orchestrator | 00:01:32.362 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.362956 | orchestrator | 00:01:32.362 STDOUT terraform:  + router_id = (known after apply) 2025-09-16 00:01:32.362966 | orchestrator | 00:01:32.362 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-16 00:01:32.362972 | orchestrator | 00:01:32.362 STDOUT terraform:  } 2025-09-16 00:01:32.363030 | orchestrator | 00:01:32.362 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-09-16 00:01:32.363041 | orchestrator | 00:01:32.363 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-09-16 00:01:32.363075 | orchestrator | 
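Each of the six node ports above follows the same pattern: a fixed address counted up from 192.168.16.10 and four allowed address pairs (192.168.112.0/20, 192.168.16.254/20, 192.168.16.8/20, 192.168.16.9/20). As a counted sketch, again with the subnet reference assumed:

resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.${count.index + 10}"
  }

  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.9/20"
  }
}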
00:01:32.363 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-16 00:01:32.363140 | orchestrator | 00:01:32.363 STDOUT terraform:  + all_tags = (known after apply) 2025-09-16 00:01:32.363148 | orchestrator | 00:01:32.363 STDOUT terraform:  + availability_zone_hints = [ 2025-09-16 00:01:32.363153 | orchestrator | 00:01:32.363 STDOUT terraform:  + "nova", 2025-09-16 00:01:32.363159 | orchestrator | 00:01:32.363 STDOUT terraform:  ] 2025-09-16 00:01:32.363212 | orchestrator | 00:01:32.363 STDOUT terraform:  + distributed = (known after apply) 2025-09-16 00:01:32.363221 | orchestrator | 00:01:32.363 STDOUT terraform:  + enable_snat = (known after apply) 2025-09-16 00:01:32.363273 | orchestrator | 00:01:32.363 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-09-16 00:01:32.363288 | orchestrator | 00:01:32.363 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-09-16 00:01:32.363335 | orchestrator | 00:01:32.363 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.363345 | orchestrator | 00:01:32.363 STDOUT terraform:  + name = "testbed" 2025-09-16 00:01:32.363404 | orchestrator | 00:01:32.363 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.363413 | orchestrator | 00:01:32.363 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-16 00:01:32.363463 | orchestrator | 00:01:32.363 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-09-16 00:01:32.363469 | orchestrator | 00:01:32.363 STDOUT terraform:  } 2025-09-16 00:01:32.363500 | orchestrator | 00:01:32.363 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-09-16 00:01:32.363552 | orchestrator | 00:01:32.363 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-09-16 00:01:32.363560 | orchestrator | 00:01:32.363 STDOUT terraform:  + description = "ssh" 2025-09-16 00:01:32.363595 | orchestrator | 00:01:32.363 STDOUT terraform:  + direction = "ingress" 2025-09-16 00:01:32.363627 | orchestrator | 00:01:32.363 STDOUT terraform:  + ethertype = "IPv4" 2025-09-16 00:01:32.363676 | orchestrator | 00:01:32.363 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.363684 | orchestrator | 00:01:32.363 STDOUT terraform:  + port_range_max = 22 2025-09-16 00:01:32.363690 | orchestrator | 00:01:32.363 STDOUT terraform:  + port_range_min = 22 2025-09-16 00:01:32.363738 | orchestrator | 00:01:32.363 STDOUT terraform:  + protocol = "tcp" 2025-09-16 00:01:32.363761 | orchestrator | 00:01:32.363 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.363793 | orchestrator | 00:01:32.363 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-16 00:01:32.363834 | orchestrator | 00:01:32.363 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-16 00:01:32.363840 | orchestrator | 00:01:32.363 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-16 00:01:32.363915 | orchestrator | 00:01:32.363 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-16 00:01:32.363923 | orchestrator | 00:01:32.363 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-16 00:01:32.363927 | orchestrator | 00:01:32.363 STDOUT terraform:  } 2025-09-16 00:01:32.363979 | orchestrator | 00:01:32.363 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-09-16 00:01:32.364023 | orchestrator | 00:01:32.363 STDOUT 
terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-09-16 00:01:32.364065 | orchestrator | 00:01:32.364 STDOUT terraform:  + description = "wireguard" 2025-09-16 00:01:32.364075 | orchestrator | 00:01:32.364 STDOUT terraform:  + direction = "ingress" 2025-09-16 00:01:32.364124 | orchestrator | 00:01:32.364 STDOUT terraform:  + ethertype = "IPv4" 2025-09-16 00:01:32.364132 | orchestrator | 00:01:32.364 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.364160 | orchestrator | 00:01:32.364 STDOUT terraform:  + port_range_max = 51820 2025-09-16 00:01:32.364170 | orchestrator | 00:01:32.364 STDOUT terraform:  + port_range_min = 51820 2025-09-16 00:01:32.364208 | orchestrator | 00:01:32.364 STDOUT terraform:  + protocol = "udp" 2025-09-16 00:01:32.364270 | orchestrator | 00:01:32.364 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.364275 | orchestrator | 00:01:32.364 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-16 00:01:32.364308 | orchestrator | 00:01:32.364 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-16 00:01:32.364315 | orchestrator | 00:01:32.364 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-16 00:01:32.364357 | orchestrator | 00:01:32.364 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-16 00:01:32.364424 | orchestrator | 00:01:32.364 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-16 00:01:32.364433 | orchestrator | 00:01:32.364 STDOUT terraform:  } 2025-09-16 00:01:32.364439 | orchestrator | 00:01:32.364 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-09-16 00:01:32.364490 | orchestrator | 00:01:32.364 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-09-16 00:01:32.364564 | orchestrator | 00:01:32.364 STDOUT terraform:  + direction = "ingress" 2025-09-16 00:01:32.364572 | orchestrator | 00:01:32.364 STDOUT terraform:  + ethertype = "IPv4" 2025-09-16 00:01:32.364582 | orchestrator | 00:01:32.364 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.364587 | orchestrator | 00:01:32.364 STDOUT terraform:  + protocol = "tcp" 2025-09-16 00:01:32.364635 | orchestrator | 00:01:32.364 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.364677 | orchestrator | 00:01:32.364 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-16 00:01:32.364687 | orchestrator | 00:01:32.364 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-16 00:01:32.364757 | orchestrator | 00:01:32.364 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-16 00:01:32.364795 | orchestrator | 00:01:32.364 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-16 00:01:32.364853 | orchestrator | 00:01:32.364 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-16 00:01:32.364858 | orchestrator | 00:01:32.364 STDOUT terraform:  } 2025-09-16 00:01:32.364895 | orchestrator | 00:01:32.364 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-09-16 00:01:32.364949 | orchestrator | 00:01:32.364 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-09-16 00:01:32.364957 | orchestrator | 00:01:32.364 STDOUT terraform:  + direction = "ingress" 2025-09-16 00:01:32.365007 | orchestrator | 00:01:32.364 STDOUT terraform:  
+ ethertype = "IPv4" 2025-09-16 00:01:32.365012 | orchestrator | 00:01:32.364 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.365038 | orchestrator | 00:01:32.365 STDOUT terraform:  + protocol = "udp" 2025-09-16 00:01:32.365075 | orchestrator | 00:01:32.365 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.365108 | orchestrator | 00:01:32.365 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-16 00:01:32.365156 | orchestrator | 00:01:32.365 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-16 00:01:32.365164 | orchestrator | 00:01:32.365 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-16 00:01:32.365206 | orchestrator | 00:01:32.365 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-16 00:01:32.365245 | orchestrator | 00:01:32.365 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-16 00:01:32.365252 | orchestrator | 00:01:32.365 STDOUT terraform:  } 2025-09-16 00:01:32.365302 | orchestrator | 00:01:32.365 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-09-16 00:01:32.365367 | orchestrator | 00:01:32.365 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-09-16 00:01:32.365377 | orchestrator | 00:01:32.365 STDOUT terraform:  + direction = "ingress" 2025-09-16 00:01:32.365424 | orchestrator | 00:01:32.365 STDOUT terraform:  + ethertype = "IPv4" 2025-09-16 00:01:32.365434 | orchestrator | 00:01:32.365 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.365478 | orchestrator | 00:01:32.365 STDOUT terraform:  + protocol = "icmp" 2025-09-16 00:01:32.365495 | orchestrator | 00:01:32.365 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.365541 | orchestrator | 00:01:32.365 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-16 00:01:32.365552 | orchestrator | 00:01:32.365 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-16 00:01:32.365590 | orchestrator | 00:01:32.365 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-16 00:01:32.365654 | orchestrator | 00:01:32.365 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-16 00:01:32.365660 | orchestrator | 00:01:32.365 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-16 00:01:32.365664 | orchestrator | 00:01:32.365 STDOUT terraform:  } 2025-09-16 00:01:32.365729 | orchestrator | 00:01:32.365 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-09-16 00:01:32.365767 | orchestrator | 00:01:32.365 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-09-16 00:01:32.365809 | orchestrator | 00:01:32.365 STDOUT terraform:  + direction = "ingress" 2025-09-16 00:01:32.365816 | orchestrator | 00:01:32.365 STDOUT terraform:  + ethertype = "IPv4" 2025-09-16 00:01:32.365852 | orchestrator | 00:01:32.365 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.365859 | orchestrator | 00:01:32.365 STDOUT terraform:  + protocol = "tcp" 2025-09-16 00:01:32.365904 | orchestrator | 00:01:32.365 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.365943 | orchestrator | 00:01:32.365 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-16 00:01:32.365969 | orchestrator | 00:01:32.365 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-16 
00:01:32.366038 | orchestrator | 00:01:32.365 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-16 00:01:32.366044 | orchestrator | 00:01:32.365 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-16 00:01:32.366086 | orchestrator | 00:01:32.366 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-16 00:01:32.366094 | orchestrator | 00:01:32.366 STDOUT terraform:  } 2025-09-16 00:01:32.366148 | orchestrator | 00:01:32.366 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-09-16 00:01:32.366193 | orchestrator | 00:01:32.366 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-09-16 00:01:32.366201 | orchestrator | 00:01:32.366 STDOUT terraform:  + direction = "ingress" 2025-09-16 00:01:32.366263 | orchestrator | 00:01:32.366 STDOUT terraform:  + ethertype = "IPv4" 2025-09-16 00:01:32.366272 | orchestrator | 00:01:32.366 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.366277 | orchestrator | 00:01:32.366 STDOUT terraform:  + protocol = "udp" 2025-09-16 00:01:32.366335 | orchestrator | 00:01:32.366 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.366342 | orchestrator | 00:01:32.366 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-16 00:01:32.366390 | orchestrator | 00:01:32.366 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-16 00:01:32.366405 | orchestrator | 00:01:32.366 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-16 00:01:32.366442 | orchestrator | 00:01:32.366 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-16 00:01:32.366485 | orchestrator | 00:01:32.366 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-16 00:01:32.366490 | orchestrator | 00:01:32.366 STDOUT terraform:  } 2025-09-16 00:01:32.366530 | orchestrator | 00:01:32.366 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-09-16 00:01:32.366575 | orchestrator | 00:01:32.366 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-09-16 00:01:32.366621 | orchestrator | 00:01:32.366 STDOUT terraform:  + direction = "ingress" 2025-09-16 00:01:32.366627 | orchestrator | 00:01:32.366 STDOUT terraform:  + ethertype = "IPv4" 2025-09-16 00:01:32.366711 | orchestrator | 00:01:32.366 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.366717 | orchestrator | 00:01:32.366 STDOUT terraform:  + protocol = "icmp" 2025-09-16 00:01:32.366723 | orchestrator | 00:01:32.366 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.366768 | orchestrator | 00:01:32.366 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-16 00:01:32.366778 | orchestrator | 00:01:32.366 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-16 00:01:32.366843 | orchestrator | 00:01:32.366 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-16 00:01:32.366852 | orchestrator | 00:01:32.366 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-16 00:01:32.366879 | orchestrator | 00:01:32.366 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-16 00:01:32.366889 | orchestrator | 00:01:32.366 STDOUT terraform:  } 2025-09-16 00:01:32.366953 | orchestrator | 00:01:32.366 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-09-16 00:01:32.367003 | orchestrator | 
00:01:32.366 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-09-16 00:01:32.367009 | orchestrator | 00:01:32.366 STDOUT terraform:  + description = "vrrp" 2025-09-16 00:01:32.367052 | orchestrator | 00:01:32.366 STDOUT terraform:  + direction = "ingress" 2025-09-16 00:01:32.367061 | orchestrator | 00:01:32.367 STDOUT terraform:  + ethertype = "IPv4" 2025-09-16 00:01:32.367123 | orchestrator | 00:01:32.367 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.367129 | orchestrator | 00:01:32.367 STDOUT terraform:  + protocol = "112" 2025-09-16 00:01:32.367135 | orchestrator | 00:01:32.367 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.367213 | orchestrator | 00:01:32.367 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-16 00:01:32.367222 | orchestrator | 00:01:32.367 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-16 00:01:32.367228 | orchestrator | 00:01:32.367 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-16 00:01:32.367280 | orchestrator | 00:01:32.367 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-16 00:01:32.367287 | orchestrator | 00:01:32.367 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-16 00:01:32.367325 | orchestrator | 00:01:32.367 STDOUT terraform:  } 2025-09-16 00:01:32.367354 | orchestrator | 00:01:32.367 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-09-16 00:01:32.367407 | orchestrator | 00:01:32.367 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-09-16 00:01:32.367414 | orchestrator | 00:01:32.367 STDOUT terraform:  + all_tags = (known after apply) 2025-09-16 00:01:32.367467 | orchestrator | 00:01:32.367 STDOUT terraform:  + description = "management security group" 2025-09-16 00:01:32.367475 | orchestrator | 00:01:32.367 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.367508 | orchestrator | 00:01:32.367 STDOUT terraform:  + name = "testbed-management" 2025-09-16 00:01:32.367515 | orchestrator | 00:01:32.367 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.370052 | orchestrator | 00:01:32.367 STDOUT terraform:  + stateful = (known after apply) 2025-09-16 00:01:32.370061 | orchestrator | 00:01:32.367 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-16 00:01:32.370065 | orchestrator | 00:01:32.367 STDOUT terraform:  } 2025-09-16 00:01:32.370069 | orchestrator | 00:01:32.367 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-09-16 00:01:32.370073 | orchestrator | 00:01:32.367 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-09-16 00:01:32.370081 | orchestrator | 00:01:32.367 STDOUT terraform:  + all_tags = (known after apply) 2025-09-16 00:01:32.370085 | orchestrator | 00:01:32.367 STDOUT terraform:  + description = "node security group" 2025-09-16 00:01:32.370089 | orchestrator | 00:01:32.367 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.370093 | orchestrator | 00:01:32.367 STDOUT terraform:  + name = "testbed-node" 2025-09-16 00:01:32.370097 | orchestrator | 00:01:32.367 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.370101 | orchestrator | 00:01:32.367 STDOUT terraform:  + stateful = (known after apply) 2025-09-16 00:01:32.370105 | orchestrator | 00:01:32.367 STDOUT terraform:  + tenant_id = (known after 
apply) 2025-09-16 00:01:32.370109 | orchestrator | 00:01:32.367 STDOUT terraform:  } 2025-09-16 00:01:32.370113 | orchestrator | 00:01:32.367 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-09-16 00:01:32.370117 | orchestrator | 00:01:32.367 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-09-16 00:01:32.370121 | orchestrator | 00:01:32.367 STDOUT terraform:  + all_tags = (known after apply) 2025-09-16 00:01:32.370125 | orchestrator | 00:01:32.367 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-09-16 00:01:32.370129 | orchestrator | 00:01:32.367 STDOUT terraform:  + dns_nameservers = [ 2025-09-16 00:01:32.370133 | orchestrator | 00:01:32.367 STDOUT terraform:  + "8.8.8.8", 2025-09-16 00:01:32.370137 | orchestrator | 00:01:32.367 STDOUT terraform:  + "9.9.9.9", 2025-09-16 00:01:32.370149 | orchestrator | 00:01:32.367 STDOUT terraform:  ] 2025-09-16 00:01:32.370153 | orchestrator | 00:01:32.367 STDOUT terraform:  + enable_dhcp = true 2025-09-16 00:01:32.370157 | orchestrator | 00:01:32.368 STDOUT terraform:  + gateway_ip = (known after apply) 2025-09-16 00:01:32.370161 | orchestrator | 00:01:32.368 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.370165 | orchestrator | 00:01:32.368 STDOUT terraform:  + ip_version = 4 2025-09-16 00:01:32.370169 | orchestrator | 00:01:32.368 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-09-16 00:01:32.370173 | orchestrator | 00:01:32.368 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-09-16 00:01:32.370177 | orchestrator | 00:01:32.368 STDOUT terraform:  + name = "subnet-testbed-management" 2025-09-16 00:01:32.370181 | orchestrator | 00:01:32.368 STDOUT terraform:  + network_id = (known after apply) 2025-09-16 00:01:32.370185 | orchestrator | 00:01:32.368 STDOUT terraform:  + no_gateway = false 2025-09-16 00:01:32.370189 | orchestrator | 00:01:32.368 STDOUT terraform:  + region = (known after apply) 2025-09-16 00:01:32.370193 | orchestrator | 00:01:32.368 STDOUT terraform:  + service_types = (known after apply) 2025-09-16 00:01:32.370196 | orchestrator | 00:01:32.368 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-16 00:01:32.370200 | orchestrator | 00:01:32.368 STDOUT terraform:  + allocation_pool { 2025-09-16 00:01:32.370204 | orchestrator | 00:01:32.368 STDOUT terraform:  + end = "192.168.31.250" 2025-09-16 00:01:32.370208 | orchestrator | 00:01:32.368 STDOUT terraform:  + start = "192.168.31.200" 2025-09-16 00:01:32.370216 | orchestrator | 00:01:32.368 STDOUT terraform:  } 2025-09-16 00:01:32.370220 | orchestrator | 00:01:32.368 STDOUT terraform:  } 2025-09-16 00:01:32.370228 | orchestrator | 00:01:32.368 STDOUT terraform:  # terraform_data.image will be created 2025-09-16 00:01:32.370232 | orchestrator | 00:01:32.368 STDOUT terraform:  + resource "terraform_data" "image" { 2025-09-16 00:01:32.370236 | orchestrator | 00:01:32.368 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.370240 | orchestrator | 00:01:32.368 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-16 00:01:32.370244 | orchestrator | 00:01:32.368 STDOUT terraform:  + output = (known after apply) 2025-09-16 00:01:32.370248 | orchestrator | 00:01:32.368 STDOUT terraform:  } 2025-09-16 00:01:32.370255 | orchestrator | 00:01:32.368 STDOUT terraform:  # terraform_data.image_node will be created 2025-09-16 00:01:32.370258 | orchestrator | 00:01:32.368
STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-09-16 00:01:32.370262 | orchestrator | 00:01:32.368 STDOUT terraform:  + id = (known after apply) 2025-09-16 00:01:32.370266 | orchestrator | 00:01:32.368 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-16 00:01:32.370270 | orchestrator | 00:01:32.368 STDOUT terraform:  + output = (known after apply) 2025-09-16 00:01:32.370274 | orchestrator | 00:01:32.368 STDOUT terraform:  } 2025-09-16 00:01:32.370277 | orchestrator | 00:01:32.368 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-09-16 00:01:32.370285 | orchestrator | 00:01:32.368 STDOUT terraform: Changes to Outputs: 2025-09-16 00:01:32.370290 | orchestrator | 00:01:32.368 STDOUT terraform:  + manager_address = (sensitive value) 2025-09-16 00:01:32.370293 | orchestrator | 00:01:32.368 STDOUT terraform:  + private_key = (sensitive value) 2025-09-16 00:01:32.555898 | orchestrator | 00:01:32.555 STDOUT terraform: terraform_data.image: Creating... 2025-09-16 00:01:32.555960 | orchestrator | 00:01:32.555 STDOUT terraform: terraform_data.image_node: Creating... 2025-09-16 00:01:32.555967 | orchestrator | 00:01:32.555 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=7a9e1804-48fd-6481-6f57-4c9cf3ed668c] 2025-09-16 00:01:32.555983 | orchestrator | 00:01:32.555 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=620e68bb-3738-4f60-5965-a13711f4a757] 2025-09-16 00:01:32.579117 | orchestrator | 00:01:32.578 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-09-16 00:01:32.579181 | orchestrator | 00:01:32.579 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-09-16 00:01:32.582950 | orchestrator | 00:01:32.582 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-09-16 00:01:32.583269 | orchestrator | 00:01:32.583 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-09-16 00:01:32.583942 | orchestrator | 00:01:32.583 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-09-16 00:01:32.585683 | orchestrator | 00:01:32.585 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-09-16 00:01:32.585924 | orchestrator | 00:01:32.585 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-09-16 00:01:32.586641 | orchestrator | 00:01:32.586 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-09-16 00:01:32.587454 | orchestrator | 00:01:32.587 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-09-16 00:01:32.598316 | orchestrator | 00:01:32.597 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-09-16 00:01:33.055224 | orchestrator | 00:01:33.054 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-09-16 00:01:33.062227 | orchestrator | 00:01:33.060 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-09-16 00:01:33.072619 | orchestrator | 00:01:33.071 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-09-16 00:01:33.078977 | orchestrator | 00:01:33.078 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 
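The image handling visible above (terraform_data.image and terraform_data.image_node echoing "Ubuntu 24.04", then the data.openstack_images_image_v2 lookups resolving it to an image ID) corresponds roughly to the following HCL. This is a minimal sketch reconstructed from the plan and apply output, not the testbed's actual Terraform sources; the most_recent flag is an assumption.

resource "terraform_data" "image" {
  input = "Ubuntu 24.04" # image name; echoed as .output once applied
}

data "openstack_images_image_v2" "image" {
  name        = terraform_data.image.output # equals the input after apply
  most_recent = true                        # assumption: pick the newest image with this name
}

# The node image is handled the same way via terraform_data.image_node and
# data.openstack_images_image_v2.image_node.
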
2025-09-16 00:01:33.663293 | orchestrator | 00:01:33.662 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=4e3e61b3-3d81-4d49-9e4c-95d0d6ec0d78] 2025-09-16 00:01:33.668312 | orchestrator | 00:01:33.668 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-09-16 00:01:33.796608 | orchestrator | 00:01:33.796 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-09-16 00:01:33.809368 | orchestrator | 00:01:33.809 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-09-16 00:01:36.285844 | orchestrator | 00:01:36.285 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=5c63af0b-1be6-4a9c-8f35-a4445080f1db] 2025-09-16 00:01:36.289927 | orchestrator | 00:01:36.289 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=ebe7fd99-ddf0-4119-8dea-cb8b427f2aed] 2025-09-16 00:01:36.293976 | orchestrator | 00:01:36.293 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-09-16 00:01:36.296338 | orchestrator | 00:01:36.296 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=216f9756-46fe-48b3-8a57-6cc5b7e0c275] 2025-09-16 00:01:36.303984 | orchestrator | 00:01:36.302 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=a99d92e2-a7d0-4115-a3b5-db7bfa0170a9] 2025-09-16 00:01:36.305876 | orchestrator | 00:01:36.305 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-09-16 00:01:36.309141 | orchestrator | 00:01:36.308 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-09-16 00:01:36.309995 | orchestrator | 00:01:36.309 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-09-16 00:01:36.320917 | orchestrator | 00:01:36.320 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=da9e83cb-2e5e-4388-ad73-1879a24665a3] 2025-09-16 00:01:36.332381 | orchestrator | 00:01:36.332 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-09-16 00:01:36.343899 | orchestrator | 00:01:36.343 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=f8c86b93-6440-4cc6-ba3c-00ae05f2a443] 2025-09-16 00:01:36.353035 | orchestrator | 00:01:36.352 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-09-16 00:01:36.357217 | orchestrator | 00:01:36.357 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=ad9de541-7002-4a51-9253-a212a9f46ca2] 2025-09-16 00:01:36.365612 | orchestrator | 00:01:36.365 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-09-16 00:01:36.380896 | orchestrator | 00:01:36.380 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=46481bd5-1fc4-4619-9f81-82a2d5c944be] 2025-09-16 00:01:36.391649 | orchestrator | 00:01:36.391 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=6b7c66eb-e150-40bb-863f-cd4924cbb0ab] 2025-09-16 00:01:36.391691 | orchestrator | 00:01:36.391 STDOUT terraform: local_file.id_rsa_pub: Creating... 
2025-09-16 00:01:36.398417 | orchestrator | 00:01:36.398 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=4c3c191ab2add752606bce31c02af7742964d0c3] 2025-09-16 00:01:36.398917 | orchestrator | 00:01:36.398 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-09-16 00:01:36.402671 | orchestrator | 00:01:36.402 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=106de4ac778ffb8adb03d8a31a321f154a8bf0d3] 2025-09-16 00:01:37.147366 | orchestrator | 00:01:37.146 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=9d0f5288-98d8-49aa-a26a-aae2304ebcdf] 2025-09-16 00:01:37.841451 | orchestrator | 00:01:37.841 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=8eebc9de-9ece-4df0-adf8-f003642d8def] 2025-09-16 00:01:37.850161 | orchestrator | 00:01:37.849 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-09-16 00:01:39.693123 | orchestrator | 00:01:39.692 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370] 2025-09-16 00:01:39.703819 | orchestrator | 00:01:39.703 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=accc6646-5f95-4ba9-892c-603bcb8fd4c4] 2025-09-16 00:01:39.716597 | orchestrator | 00:01:39.716 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=16f6c2a1-43c7-4984-96fd-7906308a93da] 2025-09-16 00:01:39.733314 | orchestrator | 00:01:39.733 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=073617aa-e75d-409e-9d4b-061b932bfcf4] 2025-09-16 00:01:39.752870 | orchestrator | 00:01:39.752 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=527c098b-264c-4a31-af0b-91dc94de5595] 2025-09-16 00:01:39.779623 | orchestrator | 00:01:39.779 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=7ac06042-f4a7-4271-9a4d-3c94b3e784c4] 2025-09-16 00:01:40.668808 | orchestrator | 00:01:40.668 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=44b2f1ec-e677-4a3e-b495-fff853b38925] 2025-09-16 00:01:40.679655 | orchestrator | 00:01:40.679 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-09-16 00:01:40.679735 | orchestrator | 00:01:40.679 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-09-16 00:01:40.681486 | orchestrator | 00:01:40.681 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-09-16 00:01:40.871446 | orchestrator | 00:01:40.871 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=42b497f6-ffa5-4f84-9ff2-59160f21e42c] 2025-09-16 00:01:40.879671 | orchestrator | 00:01:40.879 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-09-16 00:01:40.880350 | orchestrator | 00:01:40.880 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-09-16 00:01:40.881270 | orchestrator | 00:01:40.881 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 
2025-09-16 00:01:40.884046 | orchestrator | 00:01:40.883 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-09-16 00:01:40.884837 | orchestrator | 00:01:40.884 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-09-16 00:01:40.886528 | orchestrator | 00:01:40.886 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-09-16 00:01:41.095524 | orchestrator | 00:01:41.093 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=347dab90-4aef-4d59-a35e-45b1f677f94e] 2025-09-16 00:01:41.100145 | orchestrator | 00:01:41.099 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-09-16 00:01:41.102667 | orchestrator | 00:01:41.102 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-09-16 00:01:41.104958 | orchestrator | 00:01:41.104 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-09-16 00:01:41.118870 | orchestrator | 00:01:41.118 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=bbf18750-6fb8-4276-9f0a-5cd996f0d1f9] 2025-09-16 00:01:41.135021 | orchestrator | 00:01:41.134 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-09-16 00:01:41.282517 | orchestrator | 00:01:41.281 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=e9a6fa8c-916b-45fe-bfc5-3521b35f4528] 2025-09-16 00:01:41.293809 | orchestrator | 00:01:41.293 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-09-16 00:01:41.296656 | orchestrator | 00:01:41.296 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=10545758-f441-4b93-9738-227d2e38c695] 2025-09-16 00:01:41.307858 | orchestrator | 00:01:41.307 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-09-16 00:01:41.471202 | orchestrator | 00:01:41.470 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=d75022ef-b0dd-4e61-ad5a-3e7f451d14ca] 2025-09-16 00:01:41.486608 | orchestrator | 00:01:41.486 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-09-16 00:01:41.523478 | orchestrator | 00:01:41.523 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=0e37d6d7-cecb-4634-8105-63326bafdcc7] 2025-09-16 00:01:41.541209 | orchestrator | 00:01:41.541 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-09-16 00:01:41.699372 | orchestrator | 00:01:41.699 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=cfae3754-b549-4b79-94f9-46e727756f91] 2025-09-16 00:01:41.710615 | orchestrator | 00:01:41.710 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 
2025-09-16 00:01:41.721066 | orchestrator | 00:01:41.720 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=4057bce6-cc03-46d7-9a68-42bc07b93bea] 2025-09-16 00:01:41.726832 | orchestrator | 00:01:41.726 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-09-16 00:01:41.793146 | orchestrator | 00:01:41.792 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=d68628ca-ba06-411a-bc90-fd8f664541f1] 2025-09-16 00:01:42.058836 | orchestrator | 00:01:42.058 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=c06faf9c-0387-4c30-bca7-1bde9bcfe7f3] 2025-09-16 00:01:42.096850 | orchestrator | 00:01:42.096 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=cf7728e9-045a-4fc3-a883-36ba796149a3] 2025-09-16 00:01:42.116174 | orchestrator | 00:01:42.115 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=945bb886-9d0e-48e1-aa0b-b659c9e29d36] 2025-09-16 00:01:42.328864 | orchestrator | 00:01:42.328 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 0s [id=2698c67a-16d6-4e35-8895-c331d15f49cf] 2025-09-16 00:01:42.387402 | orchestrator | 00:01:42.387 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=e1e06eba-8896-4378-8c5a-9e6c356cc9ae] 2025-09-16 00:01:42.589061 | orchestrator | 00:01:42.588 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 2s [id=79f7fda5-63cf-4403-b35e-7257b6930d4f] 2025-09-16 00:01:42.619071 | orchestrator | 00:01:42.618 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=9f38b7a5-96a5-4587-89ad-35fa70e92f00] 2025-09-16 00:01:42.911960 | orchestrator | 00:01:42.911 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=9f50aac6-69cc-4e43-83ff-a242b48890f4] 2025-09-16 00:01:44.170417 | orchestrator | 00:01:44.169 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=35613506-0339-4cf6-84d5-d0b7be027f5c] 2025-09-16 00:01:44.199514 | orchestrator | 00:01:44.199 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-09-16 00:01:44.207732 | orchestrator | 00:01:44.207 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-09-16 00:01:44.210433 | orchestrator | 00:01:44.210 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-09-16 00:01:44.210624 | orchestrator | 00:01:44.210 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-09-16 00:01:44.210854 | orchestrator | 00:01:44.210 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-09-16 00:01:44.226668 | orchestrator | 00:01:44.226 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-09-16 00:01:44.228245 | orchestrator | 00:01:44.228 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 
2025-09-16 00:01:46.074809 | orchestrator | 00:01:46.072 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=12646e75-79ca-4c35-93d9-08b1d2dca705] 2025-09-16 00:01:46.091129 | orchestrator | 00:01:46.090 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-09-16 00:01:46.091216 | orchestrator | 00:01:46.091 STDOUT terraform: local_file.inventory: Creating... 2025-09-16 00:01:46.091230 | orchestrator | 00:01:46.091 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-09-16 00:01:46.104014 | orchestrator | 00:01:46.103 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=ab27075c96c6da248744868188c757d1516d606a] 2025-09-16 00:01:46.104885 | orchestrator | 00:01:46.104 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=284d5c81378d8fce8cf49b07d26b9660414fda9f] 2025-09-16 00:01:46.812475 | orchestrator | 00:01:46.812 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=12646e75-79ca-4c35-93d9-08b1d2dca705] 2025-09-16 00:01:54.212463 | orchestrator | 00:01:54.212 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-09-16 00:01:54.212588 | orchestrator | 00:01:54.212 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-09-16 00:01:54.212606 | orchestrator | 00:01:54.212 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-09-16 00:01:54.212898 | orchestrator | 00:01:54.212 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-09-16 00:01:54.227679 | orchestrator | 00:01:54.227 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-09-16 00:01:54.229028 | orchestrator | 00:01:54.228 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-09-16 00:02:04.213368 | orchestrator | 00:02:04.213 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-09-16 00:02:04.213529 | orchestrator | 00:02:04.213 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-09-16 00:02:04.213802 | orchestrator | 00:02:04.213 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-09-16 00:02:04.213982 | orchestrator | 00:02:04.213 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-09-16 00:02:04.228501 | orchestrator | 00:02:04.228 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-09-16 00:02:04.229755 | orchestrator | 00:02:04.229 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-09-16 00:02:14.216924 | orchestrator | 00:02:14.216 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-09-16 00:02:14.217085 | orchestrator | 00:02:14.216 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-09-16 00:02:14.217535 | orchestrator | 00:02:14.217 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... 
[30s elapsed] 2025-09-16 00:02:14.217575 | orchestrator | 00:02:14.217 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-09-16 00:02:14.229088 | orchestrator | 00:02:14.228 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-09-16 00:02:14.230269 | orchestrator | 00:02:14.230 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-09-16 00:02:14.818486 | orchestrator | 00:02:14.818 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=69608a15-c670-4cb8-a5d6-a323167bbc69] 2025-09-16 00:02:14.874627 | orchestrator | 00:02:14.874 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=7f8b13bb-f92d-44b6-8e71-3fe8fd669cda] 2025-09-16 00:02:14.960738 | orchestrator | 00:02:14.960 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=7259e5ef-cb2b-48b8-83b6-adb07dc2104f] 2025-09-16 00:02:14.991865 | orchestrator | 00:02:14.991 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=739bf0de-0a4a-4ac5-94da-3fe3facc067c] 2025-09-16 00:02:15.225187 | orchestrator | 00:02:15.224 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=2856c299-5b9b-4cef-b5c7-f7756a5b989d] 2025-09-16 00:02:24.220512 | orchestrator | 00:02:24.220 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2025-09-16 00:02:25.560572 | orchestrator | 00:02:25.560 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 42s [id=a7755162-82f8-442f-9fe0-2284a32a6430] 2025-09-16 00:02:25.583761 | orchestrator | 00:02:25.583 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-09-16 00:02:25.590804 | orchestrator | 00:02:25.590 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=4753860960550497572] 2025-09-16 00:02:25.592974 | orchestrator | 00:02:25.592 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-09-16 00:02:25.593352 | orchestrator | 00:02:25.593 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-09-16 00:02:25.597657 | orchestrator | 00:02:25.597 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-09-16 00:02:25.610444 | orchestrator | 00:02:25.610 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-09-16 00:02:25.610685 | orchestrator | 00:02:25.610 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-09-16 00:02:25.611136 | orchestrator | 00:02:25.611 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-09-16 00:02:25.614196 | orchestrator | 00:02:25.614 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-09-16 00:02:25.615917 | orchestrator | 00:02:25.615 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-09-16 00:02:25.615947 | orchestrator | 00:02:25.615 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-09-16 00:02:25.623490 | orchestrator | 00:02:25.623 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 
2025-09-16 00:02:29.842923 | orchestrator | 00:02:29.842 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=739bf0de-0a4a-4ac5-94da-3fe3facc067c/46481bd5-1fc4-4619-9f81-82a2d5c944be] 2025-09-16 00:02:29.845651 | orchestrator | 00:02:29.845 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=a7755162-82f8-442f-9fe0-2284a32a6430/6b7c66eb-e150-40bb-863f-cd4924cbb0ab] 2025-09-16 00:02:29.861745 | orchestrator | 00:02:29.861 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=69608a15-c670-4cb8-a5d6-a323167bbc69/ad9de541-7002-4a51-9253-a212a9f46ca2] 2025-09-16 00:02:29.892157 | orchestrator | 00:02:29.891 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=a7755162-82f8-442f-9fe0-2284a32a6430/ebe7fd99-ddf0-4119-8dea-cb8b427f2aed] 2025-09-16 00:02:29.894934 | orchestrator | 00:02:29.894 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=739bf0de-0a4a-4ac5-94da-3fe3facc067c/da9e83cb-2e5e-4388-ad73-1879a24665a3] 2025-09-16 00:02:29.923120 | orchestrator | 00:02:29.922 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=69608a15-c670-4cb8-a5d6-a323167bbc69/f8c86b93-6440-4cc6-ba3c-00ae05f2a443] 2025-09-16 00:02:35.616006 | orchestrator | 00:02:35.615 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-09-16 00:02:35.616150 | orchestrator | 00:02:35.615 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Still creating... [10s elapsed] 2025-09-16 00:02:35.618145 | orchestrator | 00:02:35.617 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Still creating... [10s elapsed] 2025-09-16 00:02:35.624263 | orchestrator | 00:02:35.624 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Still creating... [10s elapsed] 2025-09-16 00:02:36.018142 | orchestrator | 00:02:36.013 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=739bf0de-0a4a-4ac5-94da-3fe3facc067c/5c63af0b-1be6-4a9c-8f35-a4445080f1db] 2025-09-16 00:02:36.034053 | orchestrator | 00:02:36.033 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=69608a15-c670-4cb8-a5d6-a323167bbc69/a99d92e2-a7d0-4115-a3b5-db7bfa0170a9] 2025-09-16 00:02:36.040310 | orchestrator | 00:02:36.040 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=a7755162-82f8-442f-9fe0-2284a32a6430/216f9756-46fe-48b3-8a57-6cc5b7e0c275] 2025-09-16 00:02:45.617273 | orchestrator | 00:02:45.616 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-09-16 00:02:45.960246 | orchestrator | 00:02:45.959 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=52b2bca8-e38e-41b5-a0fe-c4503f4cedf4] 2025-09-16 00:02:45.982794 | orchestrator | 00:02:45.982 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
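"Apply complete" is followed by the two outputs the job reads next (the "Fetch manager address" task below consumes manager_address). Both are marked sensitive, which is why their values are blanked in the log. A sketch of how such outputs are typically declared; the private_key source is an assumption based on the generated keypair.

output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}

output "private_key" {
  value     = openstack_compute_keypair_v2.key.private_key # assumption
  sensitive = true
}
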
2025-09-16 00:02:45.982858 | orchestrator | 00:02:45.982 STDOUT terraform: Outputs: 2025-09-16 00:02:45.982869 | orchestrator | 00:02:45.982 STDOUT terraform: manager_address = 2025-09-16 00:02:45.982887 | orchestrator | 00:02:45.982 STDOUT terraform: private_key = 2025-09-16 00:02:46.226486 | orchestrator | ok: Runtime: 0:01:20.926012 2025-09-16 00:02:46.267290 | 2025-09-16 00:02:46.267481 | TASK [Create infrastructure (stable)] 2025-09-16 00:02:46.802745 | orchestrator | skipping: Conditional result was False 2025-09-16 00:02:46.820637 | 2025-09-16 00:02:46.820802 | TASK [Fetch manager address] 2025-09-16 00:02:47.237471 | orchestrator | ok 2025-09-16 00:02:47.248238 | 2025-09-16 00:02:47.248379 | TASK [Set manager_host address] 2025-09-16 00:02:47.329025 | orchestrator | ok 2025-09-16 00:02:47.338638 | 2025-09-16 00:02:47.338757 | LOOP [Update ansible collections] 2025-09-16 00:02:48.063422 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-16 00:02:48.063872 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-16 00:02:48.063940 | orchestrator | Starting galaxy collection install process 2025-09-16 00:02:48.063984 | orchestrator | Process install dependency map 2025-09-16 00:02:48.064023 | orchestrator | Starting collection install process 2025-09-16 00:02:48.064058 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2025-09-16 00:02:48.064100 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2025-09-16 00:02:48.064142 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-09-16 00:02:48.064219 | orchestrator | ok: Item: commons Runtime: 0:00:00.425763 2025-09-16 00:02:48.774688 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-16 00:02:48.774931 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-16 00:02:48.774986 | orchestrator | Starting galaxy collection install process 2025-09-16 00:02:48.775025 | orchestrator | Process install dependency map 2025-09-16 00:02:48.775060 | orchestrator | Starting collection install process 2025-09-16 00:02:48.775093 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2025-09-16 00:02:48.775126 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2025-09-16 00:02:48.775158 | orchestrator | osism.services:999.0.0 was installed successfully 2025-09-16 00:02:48.775214 | orchestrator | ok: Item: services Runtime: 0:00:00.481714 2025-09-16 00:02:48.797948 | 2025-09-16 00:02:48.798113 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-16 00:02:59.346808 | orchestrator | ok 2025-09-16 00:02:59.359748 | 2025-09-16 00:02:59.359875 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-16 00:03:59.412087 | orchestrator | ok 2025-09-16 00:03:59.422042 | 2025-09-16 00:03:59.422158 | TASK [Fetch manager ssh hostkey] 2025-09-16 00:04:00.990313 | orchestrator | Output suppressed because no_log was given 2025-09-16 00:04:00.998056 | 2025-09-16 00:04:00.998183 | TASK [Get ssh keypair from terraform environment] 2025-09-16 00:04:01.533487 | orchestrator 
| ok: Runtime: 0:00:00.008905 2025-09-16 00:04:01.549037 | 2025-09-16 00:04:01.549194 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-16 00:04:01.588689 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-16 00:04:01.596534 | 2025-09-16 00:04:01.596667 | TASK [Run manager part 0] 2025-09-16 00:04:02.466891 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-16 00:04:02.515056 | orchestrator | 2025-09-16 00:04:02.515189 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-09-16 00:04:02.515197 | orchestrator | 2025-09-16 00:04:02.515209 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-09-16 00:04:04.339315 | orchestrator | ok: [testbed-manager] 2025-09-16 00:04:04.339373 | orchestrator | 2025-09-16 00:04:04.339396 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-16 00:04:04.339405 | orchestrator | 2025-09-16 00:04:04.339414 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-16 00:04:06.180901 | orchestrator | ok: [testbed-manager] 2025-09-16 00:04:06.181090 | orchestrator | 2025-09-16 00:04:06.181119 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-16 00:04:06.806386 | orchestrator | ok: [testbed-manager] 2025-09-16 00:04:06.806472 | orchestrator | 2025-09-16 00:04:06.806489 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-16 00:04:06.849104 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:04:06.849144 | orchestrator | 2025-09-16 00:04:06.849152 | orchestrator | TASK [Update package cache] **************************************************** 2025-09-16 00:04:06.879479 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:04:06.879520 | orchestrator | 2025-09-16 00:04:06.879527 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-16 00:04:06.901095 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:04:06.901124 | orchestrator | 2025-09-16 00:04:06.901130 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-16 00:04:06.922468 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:04:06.922496 | orchestrator | 2025-09-16 00:04:06.922501 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-16 00:04:06.944024 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:04:06.944051 | orchestrator | 2025-09-16 00:04:06.944057 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-09-16 00:04:06.969818 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:04:06.969843 | orchestrator | 2025-09-16 00:04:06.969849 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-09-16 00:04:06.992702 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:04:06.992740 | orchestrator | 2025-09-16 00:04:06.992746 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-09-16 00:04:07.733743 | orchestrator | changed: 
[testbed-manager] 2025-09-16 00:04:07.733788 | orchestrator | 2025-09-16 00:04:07.733796 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-09-16 00:06:43.495487 | orchestrator | changed: [testbed-manager] 2025-09-16 00:06:43.495536 | orchestrator | 2025-09-16 00:06:43.495546 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-16 00:08:03.640610 | orchestrator | changed: [testbed-manager] 2025-09-16 00:08:03.640659 | orchestrator | 2025-09-16 00:08:03.640669 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-16 00:08:25.809154 | orchestrator | changed: [testbed-manager] 2025-09-16 00:08:25.809234 | orchestrator | 2025-09-16 00:08:25.809250 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-16 00:08:34.558416 | orchestrator | changed: [testbed-manager] 2025-09-16 00:08:34.558461 | orchestrator | 2025-09-16 00:08:34.558470 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-16 00:08:34.601110 | orchestrator | ok: [testbed-manager] 2025-09-16 00:08:34.601145 | orchestrator | 2025-09-16 00:08:34.601152 | orchestrator | TASK [Get current user] ******************************************************** 2025-09-16 00:08:35.351566 | orchestrator | ok: [testbed-manager] 2025-09-16 00:08:35.351607 | orchestrator | 2025-09-16 00:08:35.351618 | orchestrator | TASK [Create venv directory] *************************************************** 2025-09-16 00:08:36.046884 | orchestrator | changed: [testbed-manager] 2025-09-16 00:08:36.046966 | orchestrator | 2025-09-16 00:08:36.046982 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-09-16 00:08:42.861623 | orchestrator | changed: [testbed-manager] 2025-09-16 00:08:42.861689 | orchestrator | 2025-09-16 00:08:42.861715 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-09-16 00:08:48.654723 | orchestrator | changed: [testbed-manager] 2025-09-16 00:08:48.654856 | orchestrator | 2025-09-16 00:08:48.654875 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-09-16 00:08:51.247859 | orchestrator | changed: [testbed-manager] 2025-09-16 00:08:51.247944 | orchestrator | 2025-09-16 00:08:51.247959 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-09-16 00:08:52.965309 | orchestrator | changed: [testbed-manager] 2025-09-16 00:08:52.965375 | orchestrator | 2025-09-16 00:08:52.965387 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-09-16 00:08:54.072072 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-16 00:08:54.072165 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-16 00:08:54.072179 | orchestrator | 2025-09-16 00:08:54.072192 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-09-16 00:08:54.152443 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-16 00:08:54.152520 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-16 00:08:54.152533 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2025-09-16 00:08:54.152545 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-09-16 00:08:57.237595 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-16 00:08:57.237683 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-16 00:08:57.237698 | orchestrator | 2025-09-16 00:08:57.237711 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-09-16 00:08:57.786436 | orchestrator | changed: [testbed-manager] 2025-09-16 00:08:57.786519 | orchestrator | 2025-09-16 00:08:57.786534 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-09-16 00:12:20.368599 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-09-16 00:12:20.368651 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-09-16 00:12:20.368662 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-09-16 00:12:20.368669 | orchestrator | 2025-09-16 00:12:20.368677 | orchestrator | TASK [Install local collections] *********************************************** 2025-09-16 00:12:22.651586 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-09-16 00:12:22.651623 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-09-16 00:12:22.651627 | orchestrator | 2025-09-16 00:12:22.651632 | orchestrator | PLAY [Create operator user] **************************************************** 2025-09-16 00:12:22.651637 | orchestrator | 2025-09-16 00:12:22.651642 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-16 00:12:24.003715 | orchestrator | ok: [testbed-manager] 2025-09-16 00:12:24.003795 | orchestrator | 2025-09-16 00:12:24.003806 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-16 00:12:24.043759 | orchestrator | ok: [testbed-manager] 2025-09-16 00:12:24.043808 | orchestrator | 2025-09-16 00:12:24.043814 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-16 00:12:24.103326 | orchestrator | ok: [testbed-manager] 2025-09-16 00:12:24.103407 | orchestrator | 2025-09-16 00:12:24.103422 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-16 00:12:24.888863 | orchestrator | changed: [testbed-manager] 2025-09-16 00:12:24.888960 | orchestrator | 2025-09-16 00:12:24.888977 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-16 00:12:25.578775 | orchestrator | changed: [testbed-manager] 2025-09-16 00:12:25.578806 | orchestrator | 2025-09-16 00:12:25.578812 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-16 00:12:26.847721 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-09-16 00:12:26.847811 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-09-16 00:12:26.847829 | orchestrator | 2025-09-16 00:12:26.847854 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-16 00:12:28.142926 | orchestrator | changed: [testbed-manager] 2025-09-16 00:12:28.142998 | orchestrator | 2025-09-16 00:12:28.143009 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc 
configuration file] *** 2025-09-16 00:12:29.759710 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-09-16 00:12:29.760463 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-09-16 00:12:29.760517 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-09-16 00:12:29.760531 | orchestrator | 2025-09-16 00:12:29.760544 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-16 00:12:29.814482 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:12:29.814553 | orchestrator | 2025-09-16 00:12:29.814568 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-16 00:12:30.527180 | orchestrator | changed: [testbed-manager] 2025-09-16 00:12:30.527263 | orchestrator | 2025-09-16 00:12:30.527281 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-16 00:12:30.591260 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:12:30.591299 | orchestrator | 2025-09-16 00:12:30.591304 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-16 00:12:31.413530 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-16 00:12:31.413588 | orchestrator | changed: [testbed-manager] 2025-09-16 00:12:31.413602 | orchestrator | 2025-09-16 00:12:31.413615 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-16 00:12:31.449499 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:12:31.449547 | orchestrator | 2025-09-16 00:12:31.449560 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-16 00:12:31.484229 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:12:31.484270 | orchestrator | 2025-09-16 00:12:31.484283 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-16 00:12:31.509811 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:12:31.509852 | orchestrator | 2025-09-16 00:12:31.509866 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-16 00:12:31.546066 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:12:31.546110 | orchestrator | 2025-09-16 00:12:31.546125 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-16 00:12:32.234166 | orchestrator | ok: [testbed-manager] 2025-09-16 00:12:32.234291 | orchestrator | 2025-09-16 00:12:32.234308 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-16 00:12:32.234320 | orchestrator | 2025-09-16 00:12:32.234331 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-16 00:12:33.609397 | orchestrator | ok: [testbed-manager] 2025-09-16 00:12:33.609471 | orchestrator | 2025-09-16 00:12:33.609483 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-09-16 00:12:34.525949 | orchestrator | changed: [testbed-manager] 2025-09-16 00:12:34.525981 | orchestrator | 2025-09-16 00:12:34.525986 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:12:34.525992 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-16 
00:12:34.525996 | orchestrator | 2025-09-16 00:12:34.955141 | orchestrator | ok: Runtime: 0:08:32.708966 2025-09-16 00:12:34.971813 | 2025-09-16 00:12:34.971938 | TASK [Point out that logging in to the manager is now possible] 2025-09-16 00:12:35.019717 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-09-16 00:12:35.029824 | 2025-09-16 00:12:35.029942 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-16 00:12:35.072615 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-16 00:12:35.081271 | 2025-09-16 00:12:35.081384 | TASK [Run manager part 1 + 2] 2025-09-16 00:12:35.907700 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-16 00:12:35.964711 | orchestrator | 2025-09-16 00:12:35.964777 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-09-16 00:12:35.964784 | orchestrator | 2025-09-16 00:12:35.964796 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-16 00:12:38.636011 | orchestrator | ok: [testbed-manager] 2025-09-16 00:12:38.636119 | orchestrator | 2025-09-16 00:12:38.636141 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-16 00:12:38.658881 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:12:38.658912 | orchestrator | 2025-09-16 00:12:38.658919 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-16 00:12:38.692477 | orchestrator | ok: [testbed-manager] 2025-09-16 00:12:38.692516 | orchestrator | 2025-09-16 00:12:38.692526 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-16 00:12:38.723509 | orchestrator | ok: [testbed-manager] 2025-09-16 00:12:38.723543 | orchestrator | 2025-09-16 00:12:38.723551 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-16 00:12:38.781272 | orchestrator | ok: [testbed-manager] 2025-09-16 00:12:38.781404 | orchestrator | 2025-09-16 00:12:38.781416 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-16 00:12:38.846087 | orchestrator | ok: [testbed-manager] 2025-09-16 00:12:38.846122 | orchestrator | 2025-09-16 00:12:38.846131 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-16 00:12:38.883616 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-09-16 00:12:38.883639 | orchestrator | 2025-09-16 00:12:38.883644 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-16 00:12:39.533267 | orchestrator | ok: [testbed-manager] 2025-09-16 00:12:39.533308 | orchestrator | 2025-09-16 00:12:39.533317 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-16 00:12:39.583220 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:12:39.583263 | orchestrator | 2025-09-16 00:12:39.583272 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-16 00:12:40.810116 | orchestrator | changed:
[testbed-manager] 2025-09-16 00:12:40.810162 | orchestrator | 2025-09-16 00:12:40.810172 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-16 00:12:41.315028 | orchestrator | ok: [testbed-manager] 2025-09-16 00:12:41.315070 | orchestrator | 2025-09-16 00:12:41.315078 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-16 00:12:42.347222 | orchestrator | changed: [testbed-manager] 2025-09-16 00:12:42.347265 | orchestrator | 2025-09-16 00:12:42.347275 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-16 00:12:59.130343 | orchestrator | changed: [testbed-manager] 2025-09-16 00:12:59.130407 | orchestrator | 2025-09-16 00:12:59.130422 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-16 00:12:59.789446 | orchestrator | ok: [testbed-manager] 2025-09-16 00:12:59.789531 | orchestrator | 2025-09-16 00:12:59.789549 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-16 00:12:59.841120 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:12:59.841204 | orchestrator | 2025-09-16 00:12:59.841219 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-09-16 00:13:00.744848 | orchestrator | changed: [testbed-manager] 2025-09-16 00:13:00.745018 | orchestrator | 2025-09-16 00:13:00.745035 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-09-16 00:13:01.679261 | orchestrator | changed: [testbed-manager] 2025-09-16 00:13:01.679302 | orchestrator | 2025-09-16 00:13:01.679311 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-09-16 00:13:02.233098 | orchestrator | changed: [testbed-manager] 2025-09-16 00:13:02.233182 | orchestrator | 2025-09-16 00:13:02.233198 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-09-16 00:13:02.275502 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-16 00:13:02.275590 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-16 00:13:02.275604 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-16 00:13:02.275617 | orchestrator | deprecation_warnings=False in ansible.cfg. 
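The deprecation warning above names the relevant knob itself: deprecation_warnings in ansible.cfg. A minimal sketch of silencing it, assuming a local ansible.cfg next to the playbooks (the path and the decision to suppress the warning are illustrative and not part of this job):

    # Disable Ansible deprecation warnings in a local ansible.cfg,
    # as suggested by the warning text above.
    printf '[defaults]\ndeprecation_warnings = False\n' > ansible.cfg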
2025-09-16 00:13:04.166807 | orchestrator | changed: [testbed-manager] 2025-09-16 00:13:04.166876 | orchestrator | 2025-09-16 00:13:04.166885 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-09-16 00:13:13.333066 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-09-16 00:13:13.333155 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-09-16 00:13:13.333172 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-09-16 00:13:13.333183 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-09-16 00:13:13.333202 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-09-16 00:13:13.333213 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-09-16 00:13:13.333225 | orchestrator | 2025-09-16 00:13:13.333238 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-09-16 00:13:14.346788 | orchestrator | changed: [testbed-manager] 2025-09-16 00:13:14.346873 | orchestrator | 2025-09-16 00:13:14.346889 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-09-16 00:13:14.391201 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:13:14.391263 | orchestrator | 2025-09-16 00:13:14.391277 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-09-16 00:13:17.480918 | orchestrator | changed: [testbed-manager] 2025-09-16 00:13:17.481043 | orchestrator | 2025-09-16 00:13:17.481061 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-09-16 00:13:17.525389 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:13:17.525454 | orchestrator | 2025-09-16 00:13:17.525467 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-09-16 00:14:51.381330 | orchestrator | changed: [testbed-manager] 2025-09-16 00:14:51.381442 | orchestrator | 2025-09-16 00:14:51.381462 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-16 00:14:52.507517 | orchestrator | ok: [testbed-manager] 2025-09-16 00:14:52.507593 | orchestrator | 2025-09-16 00:14:52.507609 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:14:52.507623 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-09-16 00:14:52.507635 | orchestrator | 2025-09-16 00:14:52.689372 | orchestrator | ok: Runtime: 0:02:17.212815 2025-09-16 00:14:52.706671 | 2025-09-16 00:14:52.706898 | TASK [Reboot manager] 2025-09-16 00:14:54.246590 | orchestrator | ok: Runtime: 0:00:00.957157 2025-09-16 00:14:54.262548 | 2025-09-16 00:14:54.262687 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-16 00:15:08.117397 | orchestrator | ok 2025-09-16 00:15:08.128376 | 2025-09-16 00:15:08.128504 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-16 00:16:08.170174 | orchestrator | ok 2025-09-16 00:16:08.180706 | 2025-09-16 00:16:08.180846 | TASK [Deploy manager + bootstrap nodes] 2025-09-16 00:16:10.684608 | orchestrator | 2025-09-16 00:16:10.684832 | orchestrator | # DEPLOY MANAGER 2025-09-16 00:16:10.684866 | orchestrator | 2025-09-16 00:16:10.684880 | orchestrator | + set -e 2025-09-16 00:16:10.684892 | orchestrator | + echo 2025-09-16 00:16:10.684905 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-09-16 00:16:10.684921 | orchestrator | + echo 2025-09-16 00:16:10.684968 | orchestrator | + cat /opt/manager-vars.sh 2025-09-16 00:16:10.688120 | orchestrator | export NUMBER_OF_NODES=6 2025-09-16 00:16:10.688175 | orchestrator | 2025-09-16 00:16:10.688181 | orchestrator | export CEPH_VERSION=reef 2025-09-16 00:16:10.688188 | orchestrator | export CONFIGURATION_VERSION=main 2025-09-16 00:16:10.688193 | orchestrator | export MANAGER_VERSION=latest 2025-09-16 00:16:10.688215 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-09-16 00:16:10.688219 | orchestrator | 2025-09-16 00:16:10.688226 | orchestrator | export ARA=false 2025-09-16 00:16:10.688231 | orchestrator | export DEPLOY_MODE=manager 2025-09-16 00:16:10.688238 | orchestrator | export TEMPEST=true 2025-09-16 00:16:10.688243 | orchestrator | export IS_ZUUL=true 2025-09-16 00:16:10.688247 | orchestrator | 2025-09-16 00:16:10.688253 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.163 2025-09-16 00:16:10.688258 | orchestrator | export EXTERNAL_API=false 2025-09-16 00:16:10.688262 | orchestrator | 2025-09-16 00:16:10.688266 | orchestrator | export IMAGE_USER=ubuntu 2025-09-16 00:16:10.688272 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-09-16 00:16:10.688276 | orchestrator | 2025-09-16 00:16:10.688280 | orchestrator | export CEPH_STACK=ceph-ansible 2025-09-16 00:16:10.688438 | orchestrator | 2025-09-16 00:16:10.688445 | orchestrator | + echo 2025-09-16 00:16:10.688452 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-16 00:16:10.689011 | orchestrator | ++ export INTERACTIVE=false 2025-09-16 00:16:10.689017 | orchestrator | ++ INTERACTIVE=false 2025-09-16 00:16:10.689022 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-16 00:16:10.689026 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-16 00:16:10.689187 | orchestrator | + source /opt/manager-vars.sh 2025-09-16 00:16:10.689193 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-16 00:16:10.689197 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-16 00:16:10.689201 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-16 00:16:10.689205 | orchestrator | ++ CEPH_VERSION=reef 2025-09-16 00:16:10.689209 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-16 00:16:10.689212 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-16 00:16:10.689218 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-16 00:16:10.689222 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-16 00:16:10.689226 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-16 00:16:10.689234 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-16 00:16:10.689299 | orchestrator | ++ export ARA=false 2025-09-16 00:16:10.689305 | orchestrator | ++ ARA=false 2025-09-16 00:16:10.689309 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-16 00:16:10.689313 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-16 00:16:10.689317 | orchestrator | ++ export TEMPEST=true 2025-09-16 00:16:10.689321 | orchestrator | ++ TEMPEST=true 2025-09-16 00:16:10.689324 | orchestrator | ++ export IS_ZUUL=true 2025-09-16 00:16:10.689328 | orchestrator | ++ IS_ZUUL=true 2025-09-16 00:16:10.689372 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.163 2025-09-16 00:16:10.689377 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.163 2025-09-16 00:16:10.689381 | orchestrator | ++ export EXTERNAL_API=false 2025-09-16 00:16:10.689385 | orchestrator | ++ EXTERNAL_API=false 2025-09-16 00:16:10.689388 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-16 
00:16:10.689392 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-16 00:16:10.689396 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-16 00:16:10.689400 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-16 00:16:10.689403 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-16 00:16:10.689407 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-16 00:16:10.689413 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-09-16 00:16:10.744358 | orchestrator | + docker version 2025-09-16 00:16:11.023825 | orchestrator | Client: Docker Engine - Community 2025-09-16 00:16:11.023931 | orchestrator | Version: 27.5.1 2025-09-16 00:16:11.023951 | orchestrator | API version: 1.47 2025-09-16 00:16:11.023972 | orchestrator | Go version: go1.22.11 2025-09-16 00:16:11.023995 | orchestrator | Git commit: 9f9e405 2025-09-16 00:16:11.024015 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-16 00:16:11.024033 | orchestrator | OS/Arch: linux/amd64 2025-09-16 00:16:11.024051 | orchestrator | Context: default 2025-09-16 00:16:11.024069 | orchestrator | 2025-09-16 00:16:11.024081 | orchestrator | Server: Docker Engine - Community 2025-09-16 00:16:11.024092 | orchestrator | Engine: 2025-09-16 00:16:11.024101 | orchestrator | Version: 27.5.1 2025-09-16 00:16:11.024112 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-09-16 00:16:11.024152 | orchestrator | Go version: go1.22.11 2025-09-16 00:16:11.024163 | orchestrator | Git commit: 4c9b3b0 2025-09-16 00:16:11.024172 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-16 00:16:11.024182 | orchestrator | OS/Arch: linux/amd64 2025-09-16 00:16:11.024192 | orchestrator | Experimental: false 2025-09-16 00:16:11.024201 | orchestrator | containerd: 2025-09-16 00:16:11.024211 | orchestrator | Version: 1.7.27 2025-09-16 00:16:11.024221 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-09-16 00:16:11.024231 | orchestrator | runc: 2025-09-16 00:16:11.024241 | orchestrator | Version: 1.2.5 2025-09-16 00:16:11.024251 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-09-16 00:16:11.024260 | orchestrator | docker-init: 2025-09-16 00:16:11.024270 | orchestrator | Version: 0.19.0 2025-09-16 00:16:11.024280 | orchestrator | GitCommit: de40ad0 2025-09-16 00:16:11.028446 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-09-16 00:16:11.037524 | orchestrator | + set -e 2025-09-16 00:16:11.037561 | orchestrator | + source /opt/manager-vars.sh 2025-09-16 00:16:11.037572 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-16 00:16:11.037583 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-16 00:16:11.037593 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-16 00:16:11.037611 | orchestrator | ++ CEPH_VERSION=reef 2025-09-16 00:16:11.037628 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-16 00:16:11.037645 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-16 00:16:11.037776 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-16 00:16:11.037792 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-16 00:16:11.037802 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-16 00:16:11.037811 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-16 00:16:11.037826 | orchestrator | ++ export ARA=false 2025-09-16 00:16:11.037836 | orchestrator | ++ ARA=false 2025-09-16 00:16:11.037846 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-16 00:16:11.037856 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-16 00:16:11.037865 | orchestrator | ++ 
export TEMPEST=true 2025-09-16 00:16:11.037875 | orchestrator | ++ TEMPEST=true 2025-09-16 00:16:11.037885 | orchestrator | ++ export IS_ZUUL=true 2025-09-16 00:16:11.037894 | orchestrator | ++ IS_ZUUL=true 2025-09-16 00:16:11.037904 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.163 2025-09-16 00:16:11.037914 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.163 2025-09-16 00:16:11.037923 | orchestrator | ++ export EXTERNAL_API=false 2025-09-16 00:16:11.037933 | orchestrator | ++ EXTERNAL_API=false 2025-09-16 00:16:11.037942 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-16 00:16:11.037952 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-16 00:16:11.037961 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-16 00:16:11.037970 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-16 00:16:11.037981 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-16 00:16:11.037990 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-16 00:16:11.038000 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-16 00:16:11.038013 | orchestrator | ++ export INTERACTIVE=false 2025-09-16 00:16:11.038080 | orchestrator | ++ INTERACTIVE=false 2025-09-16 00:16:11.038090 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-16 00:16:11.038104 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-16 00:16:11.038117 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-16 00:16:11.038218 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-16 00:16:11.038231 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-09-16 00:16:11.044783 | orchestrator | + set -e 2025-09-16 00:16:11.044815 | orchestrator | + VERSION=reef 2025-09-16 00:16:11.045909 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-16 00:16:11.050646 | orchestrator | + [[ -n ceph_version: reef ]] 2025-09-16 00:16:11.050664 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-09-16 00:16:11.056118 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-09-16 00:16:11.062504 | orchestrator | + set -e 2025-09-16 00:16:11.062528 | orchestrator | + VERSION=2024.2 2025-09-16 00:16:11.062964 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-16 00:16:11.066637 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-09-16 00:16:11.066655 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-09-16 00:16:11.072020 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-09-16 00:16:11.072841 | orchestrator | ++ semver latest 7.0.0 2025-09-16 00:16:11.135095 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-16 00:16:11.135129 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-16 00:16:11.135143 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-09-16 00:16:11.135155 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-09-16 00:16:11.224239 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-16 00:16:11.228895 | orchestrator | + source /opt/venv/bin/activate 2025-09-16 00:16:11.229890 | orchestrator | ++ deactivate nondestructive 2025-09-16 00:16:11.229912 | orchestrator | ++ '[' -n '' ']' 2025-09-16 00:16:11.229927 | orchestrator | ++ '[' -n '' ']' 2025-09-16 00:16:11.229939 | orchestrator | ++ hash -r 2025-09-16 00:16:11.230150 | orchestrator | ++ 
'[' -n '' ']' 2025-09-16 00:16:11.230166 | orchestrator | ++ unset VIRTUAL_ENV 2025-09-16 00:16:11.230177 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-09-16 00:16:11.230188 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-09-16 00:16:11.230199 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-09-16 00:16:11.230211 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-09-16 00:16:11.230441 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-09-16 00:16:11.230456 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-09-16 00:16:11.230472 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-16 00:16:11.230484 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-16 00:16:11.230495 | orchestrator | ++ export PATH 2025-09-16 00:16:11.230899 | orchestrator | ++ '[' -n '' ']' 2025-09-16 00:16:11.230914 | orchestrator | ++ '[' -z '' ']' 2025-09-16 00:16:11.230925 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-09-16 00:16:11.230935 | orchestrator | ++ PS1='(venv) ' 2025-09-16 00:16:11.230946 | orchestrator | ++ export PS1 2025-09-16 00:16:11.230957 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-09-16 00:16:11.230972 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-09-16 00:16:11.230983 | orchestrator | ++ hash -r 2025-09-16 00:16:11.231013 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-09-16 00:16:12.540216 | orchestrator | 2025-09-16 00:16:12.540316 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-09-16 00:16:12.540333 | orchestrator | 2025-09-16 00:16:12.540344 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-16 00:16:13.102129 | orchestrator | ok: [testbed-manager] 2025-09-16 00:16:13.102228 | orchestrator | 2025-09-16 00:16:13.102244 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-16 00:16:14.096261 | orchestrator | changed: [testbed-manager] 2025-09-16 00:16:14.096364 | orchestrator | 2025-09-16 00:16:14.096378 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-09-16 00:16:14.096390 | orchestrator | 2025-09-16 00:16:14.096400 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-16 00:16:16.319562 | orchestrator | ok: [testbed-manager] 2025-09-16 00:16:16.319708 | orchestrator | 2025-09-16 00:16:16.319731 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-09-16 00:16:16.367309 | orchestrator | ok: [testbed-manager] 2025-09-16 00:16:16.367390 | orchestrator | 2025-09-16 00:16:16.367407 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-09-16 00:16:16.845569 | orchestrator | changed: [testbed-manager] 2025-09-16 00:16:16.845665 | orchestrator | 2025-09-16 00:16:16.845680 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-09-16 00:16:16.877726 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:16:16.877803 | orchestrator | 2025-09-16 00:16:16.877816 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-09-16 00:16:17.219052 | orchestrator | changed: [testbed-manager] 2025-09-16 00:16:17.219147 | orchestrator | 2025-09-16 00:16:17.219162 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-09-16 00:16:17.273530 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:16:17.273601 | orchestrator | 2025-09-16 00:16:17.273615 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-09-16 00:16:17.591278 | orchestrator | ok: [testbed-manager] 2025-09-16 00:16:17.591379 | orchestrator | 2025-09-16 00:16:17.591396 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-09-16 00:16:17.702342 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:16:17.702399 | orchestrator | 2025-09-16 00:16:17.702413 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-09-16 00:16:17.702425 | orchestrator | 2025-09-16 00:16:17.702439 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-16 00:16:19.360799 | orchestrator | ok: [testbed-manager] 2025-09-16 00:16:19.360894 | orchestrator | 2025-09-16 00:16:19.360909 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-09-16 00:16:19.478744 | orchestrator | included: osism.services.traefik for testbed-manager 2025-09-16 00:16:19.478843 | orchestrator | 2025-09-16 00:16:19.478856 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-09-16 00:16:19.549579 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-09-16 00:16:19.549633 | orchestrator | 2025-09-16 00:16:19.549649 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-09-16 00:16:20.651953 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-09-16 00:16:20.652051 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-09-16 00:16:20.652066 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-09-16 00:16:20.652078 | orchestrator | 2025-09-16 00:16:20.652091 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-09-16 00:16:22.457118 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-09-16 00:16:22.457230 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-09-16 00:16:22.457248 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-09-16 00:16:22.457261 | orchestrator | 2025-09-16 00:16:22.457273 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-09-16 00:16:23.098865 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-16 00:16:23.098959 | orchestrator | changed: [testbed-manager] 2025-09-16 00:16:23.098974 | orchestrator | 2025-09-16 00:16:23.098987 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-09-16 00:16:23.732833 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-16 00:16:23.732931 | orchestrator | changed: [testbed-manager] 2025-09-16 00:16:23.732947 | orchestrator | 2025-09-16 00:16:23.732960 | orchestrator | TASK [osism.services.traefik : Copy dynamic 
configuration] ********************* 2025-09-16 00:16:23.786497 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:16:23.786576 | orchestrator | 2025-09-16 00:16:23.786590 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-09-16 00:16:24.146072 | orchestrator | ok: [testbed-manager] 2025-09-16 00:16:24.146166 | orchestrator | 2025-09-16 00:16:24.146180 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-09-16 00:16:24.216873 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-09-16 00:16:24.216947 | orchestrator | 2025-09-16 00:16:24.216960 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-09-16 00:16:25.236161 | orchestrator | changed: [testbed-manager] 2025-09-16 00:16:25.236265 | orchestrator | 2025-09-16 00:16:25.236283 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-09-16 00:16:26.046746 | orchestrator | changed: [testbed-manager] 2025-09-16 00:16:26.046891 | orchestrator | 2025-09-16 00:16:26.046906 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-09-16 00:16:37.319966 | orchestrator | changed: [testbed-manager] 2025-09-16 00:16:37.320078 | orchestrator | 2025-09-16 00:16:37.320096 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-09-16 00:16:37.380501 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:16:37.380589 | orchestrator | 2025-09-16 00:16:37.380606 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-09-16 00:16:37.380619 | orchestrator | 2025-09-16 00:16:37.380631 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-16 00:16:39.168948 | orchestrator | ok: [testbed-manager] 2025-09-16 00:16:39.169042 | orchestrator | 2025-09-16 00:16:39.169082 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-09-16 00:16:39.276001 | orchestrator | included: osism.services.manager for testbed-manager 2025-09-16 00:16:39.276067 | orchestrator | 2025-09-16 00:16:39.276080 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-09-16 00:16:39.330631 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-09-16 00:16:39.330690 | orchestrator | 2025-09-16 00:16:39.330706 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-09-16 00:16:41.780414 | orchestrator | ok: [testbed-manager] 2025-09-16 00:16:41.780505 | orchestrator | 2025-09-16 00:16:41.780522 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-09-16 00:16:41.834245 | orchestrator | ok: [testbed-manager] 2025-09-16 00:16:41.834284 | orchestrator | 2025-09-16 00:16:41.834300 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-09-16 00:16:41.967435 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-09-16 00:16:41.967510 | orchestrator | 2025-09-16 00:16:41.967521 | 
orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-09-16 00:16:44.877968 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-09-16 00:16:44.878166 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-09-16 00:16:44.878185 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-09-16 00:16:44.878198 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-09-16 00:16:44.878209 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-09-16 00:16:44.878221 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-09-16 00:16:44.878232 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-09-16 00:16:44.878243 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-09-16 00:16:44.878256 | orchestrator | 2025-09-16 00:16:44.878268 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-09-16 00:16:45.508091 | orchestrator | changed: [testbed-manager] 2025-09-16 00:16:45.508187 | orchestrator | 2025-09-16 00:16:45.508202 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-09-16 00:16:46.121872 | orchestrator | changed: [testbed-manager] 2025-09-16 00:16:46.121966 | orchestrator | 2025-09-16 00:16:46.121980 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-09-16 00:16:46.197046 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-09-16 00:16:46.197120 | orchestrator | 2025-09-16 00:16:46.197133 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-09-16 00:16:47.384326 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-09-16 00:16:47.384416 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-09-16 00:16:47.384431 | orchestrator | 2025-09-16 00:16:47.384443 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-09-16 00:16:48.011676 | orchestrator | changed: [testbed-manager] 2025-09-16 00:16:48.011826 | orchestrator | 2025-09-16 00:16:48.011844 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-09-16 00:16:48.057796 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:16:48.057856 | orchestrator | 2025-09-16 00:16:48.057875 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2025-09-16 00:16:48.128922 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2025-09-16 00:16:48.129006 | orchestrator | 2025-09-16 00:16:48.129021 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2025-09-16 00:16:48.715776 | orchestrator | changed: [testbed-manager] 2025-09-16 00:16:48.715862 | orchestrator | 2025-09-16 00:16:48.715875 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-09-16 00:16:48.775859 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-09-16 00:16:48.775980 | orchestrator | 2025-09-16 00:16:48.775996 | orchestrator | 
TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-09-16 00:16:50.139995 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-16 00:16:50.140118 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-16 00:16:50.140135 | orchestrator | changed: [testbed-manager] 2025-09-16 00:16:50.140149 | orchestrator | 2025-09-16 00:16:50.140174 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-09-16 00:16:50.764240 | orchestrator | changed: [testbed-manager] 2025-09-16 00:16:50.764342 | orchestrator | 2025-09-16 00:16:50.764359 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-09-16 00:16:50.816374 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:16:50.816453 | orchestrator | 2025-09-16 00:16:50.816469 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-09-16 00:16:50.903289 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-09-16 00:16:50.903370 | orchestrator | 2025-09-16 00:16:50.903383 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-09-16 00:16:51.419839 | orchestrator | changed: [testbed-manager] 2025-09-16 00:16:51.419948 | orchestrator | 2025-09-16 00:16:51.419983 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-09-16 00:16:51.838511 | orchestrator | changed: [testbed-manager] 2025-09-16 00:16:51.838591 | orchestrator | 2025-09-16 00:16:51.838605 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-09-16 00:16:53.114323 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-09-16 00:16:53.114419 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-09-16 00:16:53.114432 | orchestrator | 2025-09-16 00:16:53.114442 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-09-16 00:16:53.766824 | orchestrator | changed: [testbed-manager] 2025-09-16 00:16:53.766923 | orchestrator | 2025-09-16 00:16:53.766941 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-09-16 00:16:54.158286 | orchestrator | ok: [testbed-manager] 2025-09-16 00:16:54.158382 | orchestrator | 2025-09-16 00:16:54.158397 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-09-16 00:16:54.518352 | orchestrator | changed: [testbed-manager] 2025-09-16 00:16:54.518459 | orchestrator | 2025-09-16 00:16:54.518476 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-09-16 00:16:54.572149 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:16:54.572241 | orchestrator | 2025-09-16 00:16:54.572264 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-09-16 00:16:54.650430 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-09-16 00:16:54.650512 | orchestrator | 2025-09-16 00:16:54.650526 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-09-16 00:16:54.694101 | orchestrator | ok: [testbed-manager] 2025-09-16 00:16:54.694158 | 
orchestrator | 2025-09-16 00:16:54.694172 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-09-16 00:16:56.764417 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-09-16 00:16:56.764529 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-09-16 00:16:56.764546 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-09-16 00:16:56.764558 | orchestrator | 2025-09-16 00:16:56.764570 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-09-16 00:16:57.500556 | orchestrator | changed: [testbed-manager] 2025-09-16 00:16:57.500653 | orchestrator | 2025-09-16 00:16:57.500670 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-09-16 00:16:58.226685 | orchestrator | changed: [testbed-manager] 2025-09-16 00:16:58.226825 | orchestrator | 2025-09-16 00:16:58.226842 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-09-16 00:16:58.947159 | orchestrator | changed: [testbed-manager] 2025-09-16 00:16:58.947258 | orchestrator | 2025-09-16 00:16:58.947272 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-09-16 00:16:59.025983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-09-16 00:16:59.026111 | orchestrator | 2025-09-16 00:16:59.026124 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-09-16 00:16:59.071370 | orchestrator | ok: [testbed-manager] 2025-09-16 00:16:59.071395 | orchestrator | 2025-09-16 00:16:59.071406 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-09-16 00:16:59.813328 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-09-16 00:16:59.813424 | orchestrator | 2025-09-16 00:16:59.813439 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-09-16 00:16:59.886523 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-09-16 00:16:59.886593 | orchestrator | 2025-09-16 00:16:59.886606 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-09-16 00:17:00.588105 | orchestrator | changed: [testbed-manager] 2025-09-16 00:17:00.588198 | orchestrator | 2025-09-16 00:17:00.588212 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-09-16 00:17:01.155962 | orchestrator | ok: [testbed-manager] 2025-09-16 00:17:01.156058 | orchestrator | 2025-09-16 00:17:01.156073 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-09-16 00:17:01.210877 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:17:01.210923 | orchestrator | 2025-09-16 00:17:01.210935 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-09-16 00:17:01.274709 | orchestrator | ok: [testbed-manager] 2025-09-16 00:17:01.274837 | orchestrator | 2025-09-16 00:17:01.274855 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-09-16 00:17:02.127513 | orchestrator | changed: [testbed-manager] 2025-09-16 
00:17:02.127635 | orchestrator | 2025-09-16 00:17:02.127651 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-09-16 00:18:34.618597 | orchestrator | changed: [testbed-manager] 2025-09-16 00:18:34.618723 | orchestrator | 2025-09-16 00:18:34.618795 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-09-16 00:18:35.560546 | orchestrator | ok: [testbed-manager] 2025-09-16 00:18:35.560653 | orchestrator | 2025-09-16 00:18:35.560670 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-09-16 00:18:35.614209 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:18:35.614246 | orchestrator | 2025-09-16 00:18:35.614262 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-09-16 00:18:37.924095 | orchestrator | changed: [testbed-manager] 2025-09-16 00:18:37.924181 | orchestrator | 2025-09-16 00:18:37.924190 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-09-16 00:18:37.982483 | orchestrator | ok: [testbed-manager] 2025-09-16 00:18:37.982523 | orchestrator | 2025-09-16 00:18:37.982532 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-16 00:18:37.982540 | orchestrator | 2025-09-16 00:18:37.982547 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-09-16 00:18:38.026835 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:18:38.026871 | orchestrator | 2025-09-16 00:18:38.026881 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-09-16 00:19:38.076272 | orchestrator | Pausing for 60 seconds 2025-09-16 00:19:38.076379 | orchestrator | changed: [testbed-manager] 2025-09-16 00:19:38.076395 | orchestrator | 2025-09-16 00:19:38.076408 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-09-16 00:19:41.179416 | orchestrator | changed: [testbed-manager] 2025-09-16 00:19:41.179528 | orchestrator | 2025-09-16 00:19:41.179547 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for a healthy manager service] *** 2025-09-16 00:20:22.814346 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for a healthy manager service (50 retries left). 2025-09-16 00:20:22.814447 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for a healthy manager service (49 retries left).
2025-09-16 00:20:22.814464 | orchestrator | changed: [testbed-manager] 2025-09-16 00:20:22.814506 | orchestrator | 2025-09-16 00:20:22.814519 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-09-16 00:20:32.271821 | orchestrator | changed: [testbed-manager] 2025-09-16 00:20:32.271938 | orchestrator | 2025-09-16 00:20:32.271955 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-09-16 00:20:32.352169 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-09-16 00:20:32.352278 | orchestrator | 2025-09-16 00:20:32.352300 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-16 00:20:32.352321 | orchestrator | 2025-09-16 00:20:32.352340 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-09-16 00:20:32.399946 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:20:32.400011 | orchestrator | 2025-09-16 00:20:32.400025 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:20:32.400039 | orchestrator | testbed-manager : ok=66 changed=36 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-09-16 00:20:32.400050 | orchestrator | 2025-09-16 00:20:32.498565 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-16 00:20:32.498637 | orchestrator | + deactivate 2025-09-16 00:20:32.498652 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-09-16 00:20:32.498666 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-16 00:20:32.498677 | orchestrator | + export PATH 2025-09-16 00:20:32.498689 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-09-16 00:20:32.498701 | orchestrator | + '[' -n '' ']' 2025-09-16 00:20:32.498712 | orchestrator | + hash -r 2025-09-16 00:20:32.498776 | orchestrator | + '[' -n '' ']' 2025-09-16 00:20:32.498789 | orchestrator | + unset VIRTUAL_ENV 2025-09-16 00:20:32.498801 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-09-16 00:20:32.498813 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-09-16 00:20:32.498824 | orchestrator | + unset -f deactivate 2025-09-16 00:20:32.498836 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-09-16 00:20:32.503975 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-16 00:20:32.504016 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-16 00:20:32.504028 | orchestrator | + local max_attempts=60 2025-09-16 00:20:32.504040 | orchestrator | + local name=ceph-ansible 2025-09-16 00:20:32.504051 | orchestrator | + local attempt_num=1 2025-09-16 00:20:32.505144 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-16 00:20:32.543551 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-16 00:20:32.543598 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-16 00:20:32.543611 | orchestrator | + local max_attempts=60 2025-09-16 00:20:32.543623 | orchestrator | + local name=kolla-ansible 2025-09-16 00:20:32.543634 | orchestrator | + local attempt_num=1 2025-09-16 00:20:32.544691 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-16 00:20:32.567037 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-16 00:20:32.567073 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-16 00:20:32.567085 | orchestrator | + local max_attempts=60 2025-09-16 00:20:32.567096 | orchestrator | + local name=osism-ansible 2025-09-16 00:20:32.567107 | orchestrator | + local attempt_num=1 2025-09-16 00:20:32.568047 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-16 00:20:32.598413 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-16 00:20:32.598475 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-16 00:20:32.598489 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-16 00:20:33.232119 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-09-16 00:20:33.455211 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-09-16 00:20:33.455302 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-09-16 00:20:33.455314 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-09-16 00:20:33.455345 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-09-16 00:20:33.455357 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-09-16 00:20:33.455376 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-09-16 00:20:33.455385 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-09-16 00:20:33.455394 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy) 2025-09-16 00:20:33.455403 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest 
"/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2025-09-16 00:20:33.455411 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-09-16 00:20:33.455420 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-09-16 00:20:33.455429 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-09-16 00:20:33.455437 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-09-16 00:20:33.455446 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2025-09-16 00:20:33.455454 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-09-16 00:20:33.455463 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-09-16 00:20:33.462951 | orchestrator | ++ semver latest 7.0.0 2025-09-16 00:20:33.509193 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-16 00:20:33.509291 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-16 00:20:33.509308 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-09-16 00:20:33.512076 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-09-16 00:20:45.655403 | orchestrator | 2025-09-16 00:20:45 | INFO  | Task 0f30b838-afc6-4c6a-b9a7-59fddc2cd90b (resolvconf) was prepared for execution. 2025-09-16 00:20:45.655515 | orchestrator | 2025-09-16 00:20:45 | INFO  | It takes a moment until task 0f30b838-afc6-4c6a-b9a7-59fddc2cd90b (resolvconf) has been started and output is visible here. 
2025-09-16 00:20:58.839379 | orchestrator | 2025-09-16 00:20:58.839495 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-09-16 00:20:58.839513 | orchestrator | 2025-09-16 00:20:58.839525 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-16 00:20:58.839561 | orchestrator | Tuesday 16 September 2025 00:20:49 +0000 (0:00:00.129) 0:00:00.129 ***** 2025-09-16 00:20:58.839574 | orchestrator | ok: [testbed-manager] 2025-09-16 00:20:58.839586 | orchestrator | 2025-09-16 00:20:58.839597 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-16 00:20:58.839609 | orchestrator | Tuesday 16 September 2025 00:20:52 +0000 (0:00:03.491) 0:00:03.621 ***** 2025-09-16 00:20:58.839620 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:20:58.839632 | orchestrator | 2025-09-16 00:20:58.839642 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-16 00:20:58.839653 | orchestrator | Tuesday 16 September 2025 00:20:53 +0000 (0:00:00.060) 0:00:03.682 ***** 2025-09-16 00:20:58.839664 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-09-16 00:20:58.839676 | orchestrator | 2025-09-16 00:20:58.839687 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-16 00:20:58.839698 | orchestrator | Tuesday 16 September 2025 00:20:53 +0000 (0:00:00.088) 0:00:03.770 ***** 2025-09-16 00:20:58.839709 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-09-16 00:20:58.839772 | orchestrator | 2025-09-16 00:20:58.839784 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-16 00:20:58.839795 | orchestrator | Tuesday 16 September 2025 00:20:53 +0000 (0:00:00.071) 0:00:03.841 ***** 2025-09-16 00:20:58.839806 | orchestrator | ok: [testbed-manager] 2025-09-16 00:20:58.839816 | orchestrator | 2025-09-16 00:20:58.839827 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-16 00:20:58.839838 | orchestrator | Tuesday 16 September 2025 00:20:54 +0000 (0:00:01.083) 0:00:04.924 ***** 2025-09-16 00:20:58.839848 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:20:58.839859 | orchestrator | 2025-09-16 00:20:58.839869 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-16 00:20:58.839880 | orchestrator | Tuesday 16 September 2025 00:20:54 +0000 (0:00:00.058) 0:00:04.983 ***** 2025-09-16 00:20:58.839891 | orchestrator | ok: [testbed-manager] 2025-09-16 00:20:58.839901 | orchestrator | 2025-09-16 00:20:58.839913 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-16 00:20:58.839925 | orchestrator | Tuesday 16 September 2025 00:20:54 +0000 (0:00:00.474) 0:00:05.457 ***** 2025-09-16 00:20:58.839937 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:20:58.839949 | orchestrator | 2025-09-16 00:20:58.839961 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-16 00:20:58.839974 | orchestrator | Tuesday 16 September 2025 00:20:54 +0000 (0:00:00.080) 
0:00:05.538 ***** 2025-09-16 00:20:58.839986 | orchestrator | changed: [testbed-manager] 2025-09-16 00:20:58.839999 | orchestrator | 2025-09-16 00:20:58.840011 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-16 00:20:58.840023 | orchestrator | Tuesday 16 September 2025 00:20:55 +0000 (0:00:00.525) 0:00:06.063 ***** 2025-09-16 00:20:58.840034 | orchestrator | changed: [testbed-manager] 2025-09-16 00:20:58.840047 | orchestrator | 2025-09-16 00:20:58.840059 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-16 00:20:58.840070 | orchestrator | Tuesday 16 September 2025 00:20:56 +0000 (0:00:01.063) 0:00:07.127 ***** 2025-09-16 00:20:58.840083 | orchestrator | ok: [testbed-manager] 2025-09-16 00:20:58.840094 | orchestrator | 2025-09-16 00:20:58.840106 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-16 00:20:58.840118 | orchestrator | Tuesday 16 September 2025 00:20:57 +0000 (0:00:00.962) 0:00:08.090 ***** 2025-09-16 00:20:58.840140 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-09-16 00:20:58.840161 | orchestrator | 2025-09-16 00:20:58.840174 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-16 00:20:58.840187 | orchestrator | Tuesday 16 September 2025 00:20:57 +0000 (0:00:00.092) 0:00:08.183 ***** 2025-09-16 00:20:58.840198 | orchestrator | changed: [testbed-manager] 2025-09-16 00:20:58.840210 | orchestrator | 2025-09-16 00:20:58.840222 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:20:58.840236 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-16 00:20:58.840248 | orchestrator | 2025-09-16 00:20:58.840260 | orchestrator | 2025-09-16 00:20:58.840270 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:20:58.840281 | orchestrator | Tuesday 16 September 2025 00:20:58 +0000 (0:00:01.108) 0:00:09.291 ***** 2025-09-16 00:20:58.840292 | orchestrator | =============================================================================== 2025-09-16 00:20:58.840302 | orchestrator | Gathering Facts --------------------------------------------------------- 3.49s 2025-09-16 00:20:58.840313 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.11s 2025-09-16 00:20:58.840323 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.08s 2025-09-16 00:20:58.840334 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.06s 2025-09-16 00:20:58.840344 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.96s 2025-09-16 00:20:58.840355 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.53s 2025-09-16 00:20:58.840385 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.47s 2025-09-16 00:20:58.840397 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2025-09-16 00:20:58.840407 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2025-09-16 
00:20:58.840418 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-09-16 00:20:58.840429 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2025-09-16 00:20:58.840439 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-09-16 00:20:58.840450 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-09-16 00:20:59.102193 | orchestrator | + osism apply sshconfig 2025-09-16 00:21:11.059029 | orchestrator | 2025-09-16 00:21:11 | INFO  | Task 899ba9ea-e504-4785-89c6-a55eaf03537c (sshconfig) was prepared for execution. 2025-09-16 00:21:11.059148 | orchestrator | 2025-09-16 00:21:11 | INFO  | It takes a moment until task 899ba9ea-e504-4785-89c6-a55eaf03537c (sshconfig) has been started and output is visible here. 2025-09-16 00:21:22.611444 | orchestrator | 2025-09-16 00:21:22.611565 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-09-16 00:21:22.611582 | orchestrator | 2025-09-16 00:21:22.611595 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-09-16 00:21:22.611606 | orchestrator | Tuesday 16 September 2025 00:21:14 +0000 (0:00:00.161) 0:00:00.161 ***** 2025-09-16 00:21:22.611618 | orchestrator | ok: [testbed-manager] 2025-09-16 00:21:22.611630 | orchestrator | 2025-09-16 00:21:22.611641 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-09-16 00:21:22.611652 | orchestrator | Tuesday 16 September 2025 00:21:15 +0000 (0:00:00.587) 0:00:00.749 ***** 2025-09-16 00:21:22.611663 | orchestrator | changed: [testbed-manager] 2025-09-16 00:21:22.611674 | orchestrator | 2025-09-16 00:21:22.611686 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-09-16 00:21:22.611698 | orchestrator | Tuesday 16 September 2025 00:21:16 +0000 (0:00:00.505) 0:00:01.254 ***** 2025-09-16 00:21:22.611709 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-09-16 00:21:22.611773 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-09-16 00:21:22.611808 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-09-16 00:21:22.611819 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-09-16 00:21:22.611830 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-09-16 00:21:22.611860 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-09-16 00:21:22.611871 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-09-16 00:21:22.611882 | orchestrator | 2025-09-16 00:21:22.611893 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-09-16 00:21:22.611904 | orchestrator | Tuesday 16 September 2025 00:21:21 +0000 (0:00:05.689) 0:00:06.944 ***** 2025-09-16 00:21:22.611914 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:21:22.611925 | orchestrator | 2025-09-16 00:21:22.611936 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-09-16 00:21:22.611947 | orchestrator | Tuesday 16 September 2025 00:21:21 +0000 (0:00:00.067) 0:00:07.012 ***** 2025-09-16 00:21:22.611957 | orchestrator | changed: [testbed-manager] 2025-09-16 00:21:22.611968 | orchestrator | 2025-09-16 00:21:22.611979 | 
orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:21:22.611992 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-16 00:21:22.612005 | orchestrator | 2025-09-16 00:21:22.612017 | orchestrator | 2025-09-16 00:21:22.612029 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:21:22.612041 | orchestrator | Tuesday 16 September 2025 00:21:22 +0000 (0:00:00.592) 0:00:07.604 ***** 2025-09-16 00:21:22.612053 | orchestrator | =============================================================================== 2025-09-16 00:21:22.612066 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.69s 2025-09-16 00:21:22.612078 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.59s 2025-09-16 00:21:22.612090 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.59s 2025-09-16 00:21:22.612102 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.51s 2025-09-16 00:21:22.612114 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-09-16 00:21:22.880765 | orchestrator | + osism apply known-hosts 2025-09-16 00:21:34.910667 | orchestrator | 2025-09-16 00:21:34 | INFO  | Task 69e72071-03a6-4257-befc-84962f881c7d (known-hosts) was prepared for execution. 2025-09-16 00:21:34.910827 | orchestrator | 2025-09-16 00:21:34 | INFO  | It takes a moment until task 69e72071-03a6-4257-befc-84962f881c7d (known-hosts) has been started and output is visible here. 2025-09-16 00:21:50.374108 | orchestrator | 2025-09-16 00:21:50.374235 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-09-16 00:21:50.374262 | orchestrator | 2025-09-16 00:21:50.374281 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-09-16 00:21:50.374301 | orchestrator | Tuesday 16 September 2025 00:21:38 +0000 (0:00:00.122) 0:00:00.122 ***** 2025-09-16 00:21:50.374320 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-16 00:21:50.374340 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-16 00:21:50.374360 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-16 00:21:50.374379 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-16 00:21:50.374390 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-16 00:21:50.374401 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-16 00:21:50.374412 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-16 00:21:50.374423 | orchestrator | 2025-09-16 00:21:50.374435 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-09-16 00:21:50.374447 | orchestrator | Tuesday 16 September 2025 00:21:44 +0000 (0:00:05.687) 0:00:05.810 ***** 2025-09-16 00:21:50.374484 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-16 00:21:50.374497 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned 
entries of testbed-node-3) 2025-09-16 00:21:50.374508 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-16 00:21:50.374519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-16 00:21:50.374530 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-16 00:21:50.374551 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-16 00:21:50.374563 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-16 00:21:50.374575 | orchestrator | 2025-09-16 00:21:50.374588 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-16 00:21:50.374600 | orchestrator | Tuesday 16 September 2025 00:21:44 +0000 (0:00:00.152) 0:00:05.963 ***** 2025-09-16 00:21:50.374616 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5C5K4xVvmHZvAIKbt4U7Og+6kM9fWD/Ez8gPeyVViivFZK35RKH1KTW4J6c6WJoR/wyQSrZDo/4+jxuQxKmErOIsUbDzNNS41rxA0IosgTYDe/bt6CjAR3aSWTgDKtScRMJtikryXcns+MckCC5M6gvK/s4cGeWisWe2ubmKeMpt/KF+d82Eiq8bHJNoe7HyUZhtVh5IHFPI0kZ5Tbt5fYkv9h+t3X88v93/QKZldxSg5CAsjATukXVG1vmd2Gq6nlUXfDxvVT1opa0hCchudw2OE9jiKK5Rt3cc7wHvJ8mevxtex0JmfxrPLzeq7oAbH8pFjZuo3bYLcthoia+uP7HXLZSoAXA3mBOlMQvbXWwFhuk4hRV8TBtuucLwdXMKirHJLNOwclAd8rTiadB7IHCfxt8lD81GZIjx21gL3vWHAVvf+MKUo1gUA5AegeseZ79u/b6qv6lVYtaouI6bSjIW6A4I54Bq1G2rOsVKBMOnsWRT+92URf17S0lEsnt8=) 2025-09-16 00:21:50.374632 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBARblW+LgAM1gGuwgpER4BDg3+9Yajr7v/jjvuTnODn9Tq2jCQkdcg3eGvJLUEV1U4QUzyG4Caz2KYAxV/Vc4ww=) 2025-09-16 00:21:50.374647 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH69monXcfSwG88i3TZwCe/c+Kt8nMjDYihz7JaB7iRt) 2025-09-16 00:21:50.374661 | orchestrator | 2025-09-16 00:21:50.374674 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-16 00:21:50.374687 | orchestrator | Tuesday 16 September 2025 00:21:45 +0000 (0:00:01.155) 0:00:07.119 ***** 2025-09-16 00:21:50.374743 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0O/wOJ3SqBv38Y9/WhlaPYGS9gloH9eRnJwS0l66O9Le3gHWLdHdbx16V59XwT1klmkV9OBOPBPhoexjxReuvsb3+5RLCfo6ObiXPhJM3Fmn7BgnXmi3tsg5V1RnarrFUu0JPygAuR2hMrf8r2B8HqJ3Sn8mkxb0GWBtfYmW9I4D+HUJ3VM7oBJnlAjbpaITh0zakYlOzL67YicxPmHZ8F/G3WtStHC7agh4+MStWCNoUXSdx7xOqFd9IDNuEmCUrDqkAkaDJhSxg+0+c6o3v8c5KgNJAzPg3GXc59ImvVXZ6bnn0+SCSEZzczJ2unHNpw+wa7vTa36lbn84P5SeEG/zy2no1NCYOB/ZLgu+VtVzFtsJNpg1Ref8ylpeJ+4hNny32g/0yHyIWg1gpOmpm2Pk3N44X6hfR+iU0NJLPdbIrlK1fLfx4y6/BQ9G9QKCyqwCMblarE/Fp5BcqFsXIKNJc70YP4ZtVePqS1zxmb7ta6mlfbS9EB2sFGQqWeQs=) 2025-09-16 00:21:50.374758 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNV6vZ/dFIrtN9nLsbUcPoTBy9eyaiuh/jtG1NyKD0wWqZ8go/wL1+UKGWWtHMe5pd6EDSe6rkYA5g4UGluzCXE=) 2025-09-16 00:21:50.374770 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDakPp24xeIBZ3VpfmGEUsf8MHFdlqltLOCmxnXS8tX0) 2025-09-16 00:21:50.374792 | orchestrator | 2025-09-16 00:21:50.374804 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-16 00:21:50.374817 | orchestrator | Tuesday 16 September 2025 00:21:46 +0000 (0:00:00.945) 0:00:08.064 ***** 2025-09-16 00:21:50.374830 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhFlh2WIR/C0Hq24j/Qijw/DV2qgLw4xYQ49yLEbad4v/xqelf8HONHQv/JWaPSMix2otaSZhQIy17gB83kFlTPKEoYa4nCnfsClhQuYyb1WkVHx6K3zwxkdiRUSgZ5rAz95V6FclZ09uEOpYhqjFnTOW7xDM4aX6bo64DMwoDGtRLqvR2B2f6sN/vC4IZiJ4vfI7uQYg+UU8VQBFZOOgYKKdimA/f76dLm+ub05qi9xOmrSBM2xWrsfIew6s9wBca+mG4aZdRbg/RHNzJl3GNGzFTHis8xyjy2FGUn9H0XayCe2jtMNxvGWwDGjl3iyV3mZE87IDi3e6Rg+Z4H0X2albVaYN48gKtXELCNAuXPsfj5QXtcgCioCSFth1jXiQEGO7jdhAvAnIJNfNKZ6wUmbqZa0ZA1KPfQi6SZt7yalCRLYqDeUjNAExYYknhrbU6jCo3+bYYGHxbQNDONEJqKFobF7KsecG5FHHN5vE7gzeJizSB2L1Rsa3KpL544nE=) 2025-09-16 00:21:50.374842 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICxfIDpMXSrasIOG2ZyJEndJSnD1Amq+xP4hvEN+BSCo) 2025-09-16 00:21:50.374855 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN+DvABHIMJQWlCemLCvqu2tUKRHt0nNhxC1L4GGHl55Eq4GKYsufu9iiZLyLGoG4xa6Bcn/e8cRI77E0iqQ4yw=) 2025-09-16 00:21:50.374866 | orchestrator | 2025-09-16 00:21:50.374879 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-16 00:21:50.374891 | orchestrator | Tuesday 16 September 2025 00:21:47 +0000 (0:00:00.992) 0:00:09.057 ***** 2025-09-16 00:21:50.374966 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCL/ziXSXsdavKrB1UXMaPHj6Nm4CYVNvQnzTZiKNPtUSXLKt343Cy0fZW9qgQPmV3NZ4XqOzpztyAXQXW9uHTG+OdPj8ixSFqaALKOtLDXtLMMmMxKbFJbVMPQErBfgAcnRHh+9WByRhAYy3pXD7sqIWbO3eTxRCguptnrTyAZFm9H6Yu5QQRIe6bdv2kup6voS/difjKIjWK5N2tZ6++4H0PVoPBpJ8S4uFWxSONVAwmBYRyxUcjDvx0K1/y/IP5JE5I34f6B+rMso+GtkcMNmgVTzBQfLJU/yYq2XN6viYwJsMkaF4mE8X9IvLRHuOcKEiYt1GlzZubHyYtFW4XRMW15gZsjioBb8D6W0jL71Llq7ba9+Gj1bR0lgpeyfHn2crwQG+bxJXua0hm1eVyNY/y9CfOSRPf6spFoTJd1b8VJleVXXdcrAnY7b2mbENt6fKSvW2RJ8sEnPOG3wW+Ggv1kaGja/qveR0Beu66HsKjqvERXjJof8jOkLmTfIM=) 2025-09-16 00:21:50.374979 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB4IVubBgzdm8ECDQPVHGcut4Zd1sk3zKzfrvfqF4r3FQ+ss74UBBCQYS856TwKVTvHYW5jTEAE3hNeVuaZPXWM=) 2025-09-16 00:21:50.374990 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOxgDxbjrNVCTDBwBqgnAspqkF90EI4pDCShe5mAajxW) 2025-09-16 00:21:50.375001 | orchestrator | 2025-09-16 00:21:50.375012 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-16 00:21:50.375022 | orchestrator | Tuesday 16 September 2025 00:21:48 +0000 (0:00:00.969) 0:00:10.027 ***** 2025-09-16 00:21:50.375033 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCeAlXBYVLKm9bgoJegSIb/xAlviKu4hrUFGSxSy4UnwnTfyP+eEa8BF+XQh6p2zGi8IbRpW82re7Y0te6u7vIEm0/UXDnjTiBwNmoglGYzriBDxiri+HCGXc4D8v/CQsRyUTS/FvJDSAvFu+fQzaSiSqubna2xewcAoeRvuyrz16kLGjUtAcFQbb9WugDNSgXKA9y0WwE6dhkiRUnZgiGHmlj6gjKVz06gvRlT16tscsDjCEq4NtoZB8/m7bj8VGwJQAclSpN+V5we4Shz5KGpP1uyC9OIElskQSBqUSICSTKBYEq4vAz42kDTXpr7Hjmo4ZPh97kEuL+1o8IGq4n3dDBbVIvSAyC9VgNY5pcFvpHeMfkUrrfprHQNCkKUORcP0WhC4IpW0PBiQfWMxjxUc+dcDP+J/iAq/Tjo8ywJKT0Jz3N2nWM94HvscRmMy83JDuZ4gizlxKN5PgMGWOUsOBEYTxqMYpwqLWtNvSBJbRBzEpD3OgbcA5woIqHk4tU=) 2025-09-16 00:21:50.375045 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDaEXGSB+VrdsiUQk0WgYi6hIWIOfpVxsbxMazNz9EmndIgdDo3SQKVkSqmsAszPHXLcLqdLp7bO7z6UoTJ/8vo=) 2025-09-16 00:21:50.375056 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOdCxEEvFklPgF6Ismibh5YsViCu4Xv3B7TvYEQzMPHJ) 2025-09-16 00:21:50.375074 | orchestrator | 2025-09-16 00:21:50.375085 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-16 00:21:50.375096 | orchestrator | Tuesday 16 September 2025 00:21:49 +0000 (0:00:01.015) 0:00:11.042 ***** 2025-09-16 00:21:50.375116 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1ZyfWpQVm83waYP0nGAEUUsr6Zozxz/9ym8N6VdR1aIb4N2kkc0uWeneFHgQcxOFAARrZIyawkzpu3wT/ERmjC6JGs7TxOZgpArI9oorllSUiz3OaPwH3Zc0f23XT+RiD+WFpIbGMBb/PaVOCNUSOhHzJI1ToLzFh67jtKmV8juqiokculwF8VjUpYcOtG/cGkxaqWEO22kDmBjNxL8O1rBDOcdJPK1fHA3G1kp6/vRYw7YviF3LiU0UU8MHcMw/W5fhNET5pGpugRC1dRbouMg+lpQmdFNchNnd7E0LHVWPmozG5bdgfgJwrRsbWaec178YawvCjhSYcvyVFtQ7Pbzvv6RS5QjW/oh+2tjzGNMfku/KLZZu/hH9IQofXXD5UtSmRVuFXDw6B9lXoNBaolydY2WCybkfUKVynpO+mDL2OLSY8kiBQMrtwyzxZh1Ujjkse+n0VRHm73WbpZ0KX3tLg5JDm+MgO6v/0C97ntmPNSzERuGmOuOHPYSMR3r0=) 2025-09-16 00:22:01.201582 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEE/xCAl01BgipBByzw78p13SO5X6+9MeD37B8/PUX271GHVcNXbvpd/KiNHW1twVb7G/fIMzMSiB5jG1CtLJU0=) 2025-09-16 00:22:01.201703 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICMnZWjcyTJmsEKBwvBhMkArstwa0JzxONo7AH95dQ+c) 2025-09-16 00:22:01.201766 | orchestrator | 2025-09-16 00:22:01.201793 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-16 00:22:01.201805 | orchestrator | Tuesday 16 September 2025 00:21:50 +0000 (0:00:01.028) 0:00:12.070 ***** 2025-09-16 00:22:01.201814 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF+qW0zuPEMTtuN/YliTZadXHe2sv90/yt60eL0CHkPfuMhXqKeIb+myibKFoJD2lPo0jRkFTwwODm3P0TIslPk=) 2025-09-16 00:22:01.201823 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILxpTUewZRzVcKb3+J2XqQmoOFxPclHLf3jzdryCQVNM) 2025-09-16 00:22:01.201835 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDGgK6o8e6v5cP3wA7MhGkA2mquG/0QPlaepEt5XISK6h/8iP9ZHoOwSf6mdlsd/wZwkOaINo3BH+FUTeab+bApbooqczqkMpL7lTeRESHodmNA7AP7VgNFLMcWbpCgPcpEOJxQF2LyvHps1Xwt9vfDpiUp7kYTTnlI/OJgPNoN6Uuk1xBX/l/2ecp/nLGlF4PchgsmKSxdczQUAbHBbsdgjVIkZ0v1ZiOhCDeyWMbpIN9Won5nE31G3WiFJLmvTyLs3mkDNlIWkwXsEBpJ/iU+m4BY5QSEPrMRZ6nBTynXkSuDYVg/dChFSzw95ki7m9VAjD95rlk2+ylWkEzw1YrAC25H5UT5AbYFKS3nedNqUTMe647dWX94/Vd7xwtwdKzKDKdH8oc2XUMO8QIG0LWukYaYZG3RMOyPCchuYFFVDQyTnwQ88UTGBxG2/QHPAkP/h+g8NkWedjf3Cb938IdteXdvug+iAk4hcLv/Y/R5VTyav4+r3NV/HgDrZahAJG0=) 2025-09-16 00:22:01.201846 | orchestrator | 2025-09-16 00:22:01.201855 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-09-16 00:22:01.201865 | orchestrator | Tuesday 16 September 2025 00:21:51 +0000 (0:00:01.105) 0:00:13.176 ***** 2025-09-16 00:22:01.201874 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-16 00:22:01.201884 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-16 00:22:01.201893 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-16 00:22:01.201901 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-16 00:22:01.201910 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-16 00:22:01.201918 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-16 00:22:01.201927 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-16 00:22:01.201935 | orchestrator | 2025-09-16 00:22:01.201944 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-09-16 00:22:01.201970 | orchestrator | Tuesday 16 September 2025 00:21:56 +0000 (0:00:05.185) 0:00:18.362 ***** 2025-09-16 00:22:01.201980 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-16 00:22:01.201991 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-16 00:22:01.202076 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-16 00:22:01.202097 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-16 00:22:01.202113 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-16 00:22:01.202130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-16 00:22:01.202142 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-16 00:22:01.202152 | orchestrator | 2025-09-16 00:22:01.202162 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-16 
00:22:01.202172 | orchestrator | Tuesday 16 September 2025 00:21:56 +0000 (0:00:00.173) 0:00:18.535 ***** 2025-09-16 00:22:01.202183 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH69monXcfSwG88i3TZwCe/c+Kt8nMjDYihz7JaB7iRt) 2025-09-16 00:22:01.202218 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5C5K4xVvmHZvAIKbt4U7Og+6kM9fWD/Ez8gPeyVViivFZK35RKH1KTW4J6c6WJoR/wyQSrZDo/4+jxuQxKmErOIsUbDzNNS41rxA0IosgTYDe/bt6CjAR3aSWTgDKtScRMJtikryXcns+MckCC5M6gvK/s4cGeWisWe2ubmKeMpt/KF+d82Eiq8bHJNoe7HyUZhtVh5IHFPI0kZ5Tbt5fYkv9h+t3X88v93/QKZldxSg5CAsjATukXVG1vmd2Gq6nlUXfDxvVT1opa0hCchudw2OE9jiKK5Rt3cc7wHvJ8mevxtex0JmfxrPLzeq7oAbH8pFjZuo3bYLcthoia+uP7HXLZSoAXA3mBOlMQvbXWwFhuk4hRV8TBtuucLwdXMKirHJLNOwclAd8rTiadB7IHCfxt8lD81GZIjx21gL3vWHAVvf+MKUo1gUA5AegeseZ79u/b6qv6lVYtaouI6bSjIW6A4I54Bq1G2rOsVKBMOnsWRT+92URf17S0lEsnt8=) 2025-09-16 00:22:01.202230 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBARblW+LgAM1gGuwgpER4BDg3+9Yajr7v/jjvuTnODn9Tq2jCQkdcg3eGvJLUEV1U4QUzyG4Caz2KYAxV/Vc4ww=) 2025-09-16 00:22:01.202241 | orchestrator | 2025-09-16 00:22:01.202251 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-16 00:22:01.202260 | orchestrator | Tuesday 16 September 2025 00:21:57 +0000 (0:00:01.100) 0:00:19.635 ***** 2025-09-16 00:22:01.202269 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0O/wOJ3SqBv38Y9/WhlaPYGS9gloH9eRnJwS0l66O9Le3gHWLdHdbx16V59XwT1klmkV9OBOPBPhoexjxReuvsb3+5RLCfo6ObiXPhJM3Fmn7BgnXmi3tsg5V1RnarrFUu0JPygAuR2hMrf8r2B8HqJ3Sn8mkxb0GWBtfYmW9I4D+HUJ3VM7oBJnlAjbpaITh0zakYlOzL67YicxPmHZ8F/G3WtStHC7agh4+MStWCNoUXSdx7xOqFd9IDNuEmCUrDqkAkaDJhSxg+0+c6o3v8c5KgNJAzPg3GXc59ImvVXZ6bnn0+SCSEZzczJ2unHNpw+wa7vTa36lbn84P5SeEG/zy2no1NCYOB/ZLgu+VtVzFtsJNpg1Ref8ylpeJ+4hNny32g/0yHyIWg1gpOmpm2Pk3N44X6hfR+iU0NJLPdbIrlK1fLfx4y6/BQ9G9QKCyqwCMblarE/Fp5BcqFsXIKNJc70YP4ZtVePqS1zxmb7ta6mlfbS9EB2sFGQqWeQs=) 2025-09-16 00:22:01.202278 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNV6vZ/dFIrtN9nLsbUcPoTBy9eyaiuh/jtG1NyKD0wWqZ8go/wL1+UKGWWtHMe5pd6EDSe6rkYA5g4UGluzCXE=) 2025-09-16 00:22:01.202287 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDakPp24xeIBZ3VpfmGEUsf8MHFdlqltLOCmxnXS8tX0) 2025-09-16 00:22:01.202296 | orchestrator | 2025-09-16 00:22:01.202305 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-16 00:22:01.202313 | orchestrator | Tuesday 16 September 2025 00:21:58 +0000 (0:00:01.061) 0:00:20.697 ***** 2025-09-16 00:22:01.202332 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhFlh2WIR/C0Hq24j/Qijw/DV2qgLw4xYQ49yLEbad4v/xqelf8HONHQv/JWaPSMix2otaSZhQIy17gB83kFlTPKEoYa4nCnfsClhQuYyb1WkVHx6K3zwxkdiRUSgZ5rAz95V6FclZ09uEOpYhqjFnTOW7xDM4aX6bo64DMwoDGtRLqvR2B2f6sN/vC4IZiJ4vfI7uQYg+UU8VQBFZOOgYKKdimA/f76dLm+ub05qi9xOmrSBM2xWrsfIew6s9wBca+mG4aZdRbg/RHNzJl3GNGzFTHis8xyjy2FGUn9H0XayCe2jtMNxvGWwDGjl3iyV3mZE87IDi3e6Rg+Z4H0X2albVaYN48gKtXELCNAuXPsfj5QXtcgCioCSFth1jXiQEGO7jdhAvAnIJNfNKZ6wUmbqZa0ZA1KPfQi6SZt7yalCRLYqDeUjNAExYYknhrbU6jCo3+bYYGHxbQNDONEJqKFobF7KsecG5FHHN5vE7gzeJizSB2L1Rsa3KpL544nE=) 2025-09-16 00:22:01.202341 | orchestrator | changed: [testbed-manager] 
=> (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN+DvABHIMJQWlCemLCvqu2tUKRHt0nNhxC1L4GGHl55Eq4GKYsufu9iiZLyLGoG4xa6Bcn/e8cRI77E0iqQ4yw=) 2025-09-16 00:22:01.202351 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICxfIDpMXSrasIOG2ZyJEndJSnD1Amq+xP4hvEN+BSCo) 2025-09-16 00:22:01.202359 | orchestrator | 2025-09-16 00:22:01.202368 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-16 00:22:01.202376 | orchestrator | Tuesday 16 September 2025 00:22:00 +0000 (0:00:01.060) 0:00:21.757 ***** 2025-09-16 00:22:01.202390 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB4IVubBgzdm8ECDQPVHGcut4Zd1sk3zKzfrvfqF4r3FQ+ss74UBBCQYS856TwKVTvHYW5jTEAE3hNeVuaZPXWM=) 2025-09-16 00:22:01.202400 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCL/ziXSXsdavKrB1UXMaPHj6Nm4CYVNvQnzTZiKNPtUSXLKt343Cy0fZW9qgQPmV3NZ4XqOzpztyAXQXW9uHTG+OdPj8ixSFqaALKOtLDXtLMMmMxKbFJbVMPQErBfgAcnRHh+9WByRhAYy3pXD7sqIWbO3eTxRCguptnrTyAZFm9H6Yu5QQRIe6bdv2kup6voS/difjKIjWK5N2tZ6++4H0PVoPBpJ8S4uFWxSONVAwmBYRyxUcjDvx0K1/y/IP5JE5I34f6B+rMso+GtkcMNmgVTzBQfLJU/yYq2XN6viYwJsMkaF4mE8X9IvLRHuOcKEiYt1GlzZubHyYtFW4XRMW15gZsjioBb8D6W0jL71Llq7ba9+Gj1bR0lgpeyfHn2crwQG+bxJXua0hm1eVyNY/y9CfOSRPf6spFoTJd1b8VJleVXXdcrAnY7b2mbENt6fKSvW2RJ8sEnPOG3wW+Ggv1kaGja/qveR0Beu66HsKjqvERXjJof8jOkLmTfIM=) 2025-09-16 00:22:01.202418 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOxgDxbjrNVCTDBwBqgnAspqkF90EI4pDCShe5mAajxW) 2025-09-16 00:22:05.291872 | orchestrator | 2025-09-16 00:22:05.291960 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-16 00:22:05.291971 | orchestrator | Tuesday 16 September 2025 00:22:01 +0000 (0:00:01.134) 0:00:22.892 ***** 2025-09-16 00:22:05.291982 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCeAlXBYVLKm9bgoJegSIb/xAlviKu4hrUFGSxSy4UnwnTfyP+eEa8BF+XQh6p2zGi8IbRpW82re7Y0te6u7vIEm0/UXDnjTiBwNmoglGYzriBDxiri+HCGXc4D8v/CQsRyUTS/FvJDSAvFu+fQzaSiSqubna2xewcAoeRvuyrz16kLGjUtAcFQbb9WugDNSgXKA9y0WwE6dhkiRUnZgiGHmlj6gjKVz06gvRlT16tscsDjCEq4NtoZB8/m7bj8VGwJQAclSpN+V5we4Shz5KGpP1uyC9OIElskQSBqUSICSTKBYEq4vAz42kDTXpr7Hjmo4ZPh97kEuL+1o8IGq4n3dDBbVIvSAyC9VgNY5pcFvpHeMfkUrrfprHQNCkKUORcP0WhC4IpW0PBiQfWMxjxUc+dcDP+J/iAq/Tjo8ywJKT0Jz3N2nWM94HvscRmMy83JDuZ4gizlxKN5PgMGWOUsOBEYTxqMYpwqLWtNvSBJbRBzEpD3OgbcA5woIqHk4tU=) 2025-09-16 00:22:05.291993 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDaEXGSB+VrdsiUQk0WgYi6hIWIOfpVxsbxMazNz9EmndIgdDo3SQKVkSqmsAszPHXLcLqdLp7bO7z6UoTJ/8vo=) 2025-09-16 00:22:05.292003 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOdCxEEvFklPgF6Ismibh5YsViCu4Xv3B7TvYEQzMPHJ) 2025-09-16 00:22:05.292012 | orchestrator | 2025-09-16 00:22:05.292019 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-16 00:22:05.292026 | orchestrator | Tuesday 16 September 2025 00:22:02 +0000 (0:00:01.050) 0:00:23.943 ***** 2025-09-16 00:22:05.292034 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC1ZyfWpQVm83waYP0nGAEUUsr6Zozxz/9ym8N6VdR1aIb4N2kkc0uWeneFHgQcxOFAARrZIyawkzpu3wT/ERmjC6JGs7TxOZgpArI9oorllSUiz3OaPwH3Zc0f23XT+RiD+WFpIbGMBb/PaVOCNUSOhHzJI1ToLzFh67jtKmV8juqiokculwF8VjUpYcOtG/cGkxaqWEO22kDmBjNxL8O1rBDOcdJPK1fHA3G1kp6/vRYw7YviF3LiU0UU8MHcMw/W5fhNET5pGpugRC1dRbouMg+lpQmdFNchNnd7E0LHVWPmozG5bdgfgJwrRsbWaec178YawvCjhSYcvyVFtQ7Pbzvv6RS5QjW/oh+2tjzGNMfku/KLZZu/hH9IQofXXD5UtSmRVuFXDw6B9lXoNBaolydY2WCybkfUKVynpO+mDL2OLSY8kiBQMrtwyzxZh1Ujjkse+n0VRHm73WbpZ0KX3tLg5JDm+MgO6v/0C97ntmPNSzERuGmOuOHPYSMR3r0=) 2025-09-16 00:22:05.292060 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEE/xCAl01BgipBByzw78p13SO5X6+9MeD37B8/PUX271GHVcNXbvpd/KiNHW1twVb7G/fIMzMSiB5jG1CtLJU0=) 2025-09-16 00:22:05.292068 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICMnZWjcyTJmsEKBwvBhMkArstwa0JzxONo7AH95dQ+c) 2025-09-16 00:22:05.292075 | orchestrator | 2025-09-16 00:22:05.292083 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-16 00:22:05.292090 | orchestrator | Tuesday 16 September 2025 00:22:03 +0000 (0:00:01.042) 0:00:24.985 ***** 2025-09-16 00:22:05.292097 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF+qW0zuPEMTtuN/YliTZadXHe2sv90/yt60eL0CHkPfuMhXqKeIb+myibKFoJD2lPo0jRkFTwwODm3P0TIslPk=) 2025-09-16 00:22:05.292105 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGgK6o8e6v5cP3wA7MhGkA2mquG/0QPlaepEt5XISK6h/8iP9ZHoOwSf6mdlsd/wZwkOaINo3BH+FUTeab+bApbooqczqkMpL7lTeRESHodmNA7AP7VgNFLMcWbpCgPcpEOJxQF2LyvHps1Xwt9vfDpiUp7kYTTnlI/OJgPNoN6Uuk1xBX/l/2ecp/nLGlF4PchgsmKSxdczQUAbHBbsdgjVIkZ0v1ZiOhCDeyWMbpIN9Won5nE31G3WiFJLmvTyLs3mkDNlIWkwXsEBpJ/iU+m4BY5QSEPrMRZ6nBTynXkSuDYVg/dChFSzw95ki7m9VAjD95rlk2+ylWkEzw1YrAC25H5UT5AbYFKS3nedNqUTMe647dWX94/Vd7xwtwdKzKDKdH8oc2XUMO8QIG0LWukYaYZG3RMOyPCchuYFFVDQyTnwQ88UTGBxG2/QHPAkP/h+g8NkWedjf3Cb938IdteXdvug+iAk4hcLv/Y/R5VTyav4+r3NV/HgDrZahAJG0=) 2025-09-16 00:22:05.292112 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILxpTUewZRzVcKb3+J2XqQmoOFxPclHLf3jzdryCQVNM) 2025-09-16 00:22:05.292119 | orchestrator | 2025-09-16 00:22:05.292127 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-09-16 00:22:05.292134 | orchestrator | Tuesday 16 September 2025 00:22:04 +0000 (0:00:01.037) 0:00:26.023 ***** 2025-09-16 00:22:05.292141 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-16 00:22:05.292149 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-16 00:22:05.292156 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-16 00:22:05.292163 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-16 00:22:05.292170 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-16 00:22:05.292177 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-16 00:22:05.292184 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-16 00:22:05.292191 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:22:05.292199 | orchestrator | 2025-09-16 00:22:05.292220 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] 
************* 2025-09-16 00:22:05.292227 | orchestrator | Tuesday 16 September 2025 00:22:04 +0000 (0:00:00.155) 0:00:26.179 ***** 2025-09-16 00:22:05.292234 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:22:05.292241 | orchestrator | 2025-09-16 00:22:05.292248 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-09-16 00:22:05.292256 | orchestrator | Tuesday 16 September 2025 00:22:04 +0000 (0:00:00.070) 0:00:26.250 ***** 2025-09-16 00:22:05.292263 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:22:05.292270 | orchestrator | 2025-09-16 00:22:05.292277 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-09-16 00:22:05.292284 | orchestrator | Tuesday 16 September 2025 00:22:04 +0000 (0:00:00.047) 0:00:26.298 ***** 2025-09-16 00:22:05.292296 | orchestrator | changed: [testbed-manager] 2025-09-16 00:22:05.292303 | orchestrator | 2025-09-16 00:22:05.292310 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:22:05.292317 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-16 00:22:05.292326 | orchestrator | 2025-09-16 00:22:05.292333 | orchestrator | 2025-09-16 00:22:05.292354 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:22:05.292362 | orchestrator | Tuesday 16 September 2025 00:22:05 +0000 (0:00:00.475) 0:00:26.773 ***** 2025-09-16 00:22:05.292369 | orchestrator | =============================================================================== 2025-09-16 00:22:05.292376 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.69s 2025-09-16 00:22:05.292383 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.19s 2025-09-16 00:22:05.292391 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-09-16 00:22:05.292398 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-09-16 00:22:05.292405 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-09-16 00:22:05.292412 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-09-16 00:22:05.292420 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-09-16 00:22:05.292428 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-09-16 00:22:05.292436 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-09-16 00:22:05.292444 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-09-16 00:22:05.292452 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-09-16 00:22:05.292461 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-16 00:22:05.292469 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-09-16 00:22:05.292477 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2025-09-16 00:22:05.292485 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2025-09-16 
00:22:05.292493 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2025-09-16 00:22:05.292502 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.48s 2025-09-16 00:22:05.292510 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-09-16 00:22:05.292518 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2025-09-16 00:22:05.292530 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.15s 2025-09-16 00:22:05.543993 | orchestrator | + osism apply squid 2025-09-16 00:22:17.550699 | orchestrator | 2025-09-16 00:22:17 | INFO  | Task ff0119dd-e096-487b-b8c3-c825f98f663b (squid) was prepared for execution. 2025-09-16 00:22:17.550863 | orchestrator | 2025-09-16 00:22:17 | INFO  | It takes a moment until task ff0119dd-e096-487b-b8c3-c825f98f663b (squid) has been started and output is visible here. 2025-09-16 00:24:10.927135 | orchestrator | 2025-09-16 00:24:10.927245 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-09-16 00:24:10.927261 | orchestrator | 2025-09-16 00:24:10.927272 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-09-16 00:24:10.927283 | orchestrator | Tuesday 16 September 2025 00:22:21 +0000 (0:00:00.162) 0:00:00.162 ***** 2025-09-16 00:24:10.927293 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-09-16 00:24:10.927304 | orchestrator | 2025-09-16 00:24:10.927314 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-09-16 00:24:10.927350 | orchestrator | Tuesday 16 September 2025 00:22:21 +0000 (0:00:00.087) 0:00:00.250 ***** 2025-09-16 00:24:10.927361 | orchestrator | ok: [testbed-manager] 2025-09-16 00:24:10.927372 | orchestrator | 2025-09-16 00:24:10.927382 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-09-16 00:24:10.927391 | orchestrator | Tuesday 16 September 2025 00:22:22 +0000 (0:00:01.421) 0:00:01.672 ***** 2025-09-16 00:24:10.927401 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-09-16 00:24:10.927410 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-09-16 00:24:10.927420 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-09-16 00:24:10.927430 | orchestrator | 2025-09-16 00:24:10.927440 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-09-16 00:24:10.927449 | orchestrator | Tuesday 16 September 2025 00:22:24 +0000 (0:00:01.153) 0:00:02.826 ***** 2025-09-16 00:24:10.927459 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-09-16 00:24:10.927468 | orchestrator | 2025-09-16 00:24:10.927477 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-09-16 00:24:10.927487 | orchestrator | Tuesday 16 September 2025 00:22:25 +0000 (0:00:01.081) 0:00:03.907 ***** 2025-09-16 00:24:10.927496 | orchestrator | ok: [testbed-manager] 2025-09-16 00:24:10.927506 | orchestrator | 2025-09-16 00:24:10.927515 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 
2025-09-16 00:24:10.927524 | orchestrator | Tuesday 16 September 2025 00:22:25 +0000 (0:00:00.373) 0:00:04.281 ***** 2025-09-16 00:24:10.927534 | orchestrator | changed: [testbed-manager] 2025-09-16 00:24:10.927543 | orchestrator | 2025-09-16 00:24:10.927553 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-09-16 00:24:10.927562 | orchestrator | Tuesday 16 September 2025 00:22:26 +0000 (0:00:00.900) 0:00:05.182 ***** 2025-09-16 00:24:10.927571 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-09-16 00:24:10.927581 | orchestrator | ok: [testbed-manager] 2025-09-16 00:24:10.927591 | orchestrator | 2025-09-16 00:24:10.927600 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-09-16 00:24:10.927610 | orchestrator | Tuesday 16 September 2025 00:22:57 +0000 (0:00:31.354) 0:00:36.536 ***** 2025-09-16 00:24:10.927619 | orchestrator | changed: [testbed-manager] 2025-09-16 00:24:10.927628 | orchestrator | 2025-09-16 00:24:10.927638 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-09-16 00:24:10.927647 | orchestrator | Tuesday 16 September 2025 00:23:09 +0000 (0:00:12.062) 0:00:48.599 ***** 2025-09-16 00:24:10.927658 | orchestrator | Pausing for 60 seconds 2025-09-16 00:24:10.927668 | orchestrator | changed: [testbed-manager] 2025-09-16 00:24:10.927678 | orchestrator | 2025-09-16 00:24:10.927716 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-09-16 00:24:10.927728 | orchestrator | Tuesday 16 September 2025 00:24:09 +0000 (0:01:00.081) 0:01:48.681 ***** 2025-09-16 00:24:10.927739 | orchestrator | ok: [testbed-manager] 2025-09-16 00:24:10.927749 | orchestrator | 2025-09-16 00:24:10.927760 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-09-16 00:24:10.927771 | orchestrator | Tuesday 16 September 2025 00:24:09 +0000 (0:00:00.072) 0:01:48.754 ***** 2025-09-16 00:24:10.927781 | orchestrator | changed: [testbed-manager] 2025-09-16 00:24:10.927793 | orchestrator | 2025-09-16 00:24:10.927803 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:24:10.927813 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:24:10.927823 | orchestrator | 2025-09-16 00:24:10.927832 | orchestrator | 2025-09-16 00:24:10.927841 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:24:10.927851 | orchestrator | Tuesday 16 September 2025 00:24:10 +0000 (0:00:00.664) 0:01:49.418 ***** 2025-09-16 00:24:10.927870 | orchestrator | =============================================================================== 2025-09-16 00:24:10.927880 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-09-16 00:24:10.927889 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.35s 2025-09-16 00:24:10.927898 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.06s 2025-09-16 00:24:10.927908 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.42s 2025-09-16 00:24:10.927917 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.15s 2025-09-16 
00:24:10.927927 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.08s 2025-09-16 00:24:10.927936 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.90s 2025-09-16 00:24:10.927946 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.66s 2025-09-16 00:24:10.927955 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2025-09-16 00:24:10.927965 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-09-16 00:24:10.927974 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-09-16 00:24:11.230864 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-16 00:24:11.231173 | orchestrator | ++ semver latest 9.0.0 2025-09-16 00:24:11.281095 | orchestrator | + [[ -1 -lt 0 ]] 2025-09-16 00:24:11.281147 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-16 00:24:11.282096 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-09-16 00:24:23.242377 | orchestrator | 2025-09-16 00:24:23 | INFO  | Task f289b859-4850-4341-b165-942e9bf95c80 (operator) was prepared for execution. 2025-09-16 00:24:23.242494 | orchestrator | 2025-09-16 00:24:23 | INFO  | It takes a moment until task f289b859-4850-4341-b165-942e9bf95c80 (operator) has been started and output is visible here. 2025-09-16 00:24:39.117568 | orchestrator | 2025-09-16 00:24:39.117725 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-09-16 00:24:39.117745 | orchestrator | 2025-09-16 00:24:39.117758 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-16 00:24:39.117769 | orchestrator | Tuesday 16 September 2025 00:24:26 +0000 (0:00:00.110) 0:00:00.110 ***** 2025-09-16 00:24:39.117780 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:24:39.117792 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:24:39.117803 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:24:39.117814 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:24:39.117825 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:24:39.117835 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:24:39.117846 | orchestrator | 2025-09-16 00:24:39.117857 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-09-16 00:24:39.117868 | orchestrator | Tuesday 16 September 2025 00:24:30 +0000 (0:00:03.566) 0:00:03.677 ***** 2025-09-16 00:24:39.117879 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:24:39.117890 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:24:39.117901 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:24:39.117912 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:24:39.117923 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:24:39.117933 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:24:39.117944 | orchestrator | 2025-09-16 00:24:39.117957 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-09-16 00:24:39.117976 | orchestrator | 2025-09-16 00:24:39.117993 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-16 00:24:39.118013 | orchestrator | Tuesday 16 September 2025 00:24:30 +0000 (0:00:00.681) 0:00:04.358 ***** 2025-09-16 00:24:39.118088 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:24:39.118099 | orchestrator | ok: 
[testbed-node-1] 2025-09-16 00:24:39.118112 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:24:39.118124 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:24:39.118136 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:24:39.118147 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:24:39.118214 | orchestrator | 2025-09-16 00:24:39.118228 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-16 00:24:39.118240 | orchestrator | Tuesday 16 September 2025 00:24:31 +0000 (0:00:00.138) 0:00:04.497 ***** 2025-09-16 00:24:39.118252 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:24:39.118264 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:24:39.118276 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:24:39.118288 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:24:39.118300 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:24:39.118312 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:24:39.118324 | orchestrator | 2025-09-16 00:24:39.118336 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-16 00:24:39.118347 | orchestrator | Tuesday 16 September 2025 00:24:31 +0000 (0:00:00.123) 0:00:04.620 ***** 2025-09-16 00:24:39.118359 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:24:39.118372 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:24:39.118384 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:24:39.118396 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:24:39.118409 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:24:39.118421 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:24:39.118434 | orchestrator | 2025-09-16 00:24:39.118446 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-16 00:24:39.118457 | orchestrator | Tuesday 16 September 2025 00:24:31 +0000 (0:00:00.562) 0:00:05.182 ***** 2025-09-16 00:24:39.118468 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:24:39.118479 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:24:39.118490 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:24:39.118500 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:24:39.118511 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:24:39.118521 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:24:39.118532 | orchestrator | 2025-09-16 00:24:39.118543 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-16 00:24:39.118553 | orchestrator | Tuesday 16 September 2025 00:24:32 +0000 (0:00:00.841) 0:00:06.024 ***** 2025-09-16 00:24:39.118564 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-09-16 00:24:39.118575 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-09-16 00:24:39.118586 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-09-16 00:24:39.118597 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-09-16 00:24:39.118608 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-09-16 00:24:39.118618 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-09-16 00:24:39.118629 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-09-16 00:24:39.118639 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-09-16 00:24:39.118650 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-09-16 00:24:39.118661 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-09-16 00:24:39.118671 | orchestrator | changed: 
[testbed-node-4] => (item=sudo) 2025-09-16 00:24:39.118703 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-09-16 00:24:39.118714 | orchestrator | 2025-09-16 00:24:39.118725 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-16 00:24:39.118736 | orchestrator | Tuesday 16 September 2025 00:24:34 +0000 (0:00:02.169) 0:00:08.194 ***** 2025-09-16 00:24:39.118747 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:24:39.118757 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:24:39.118768 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:24:39.118779 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:24:39.118789 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:24:39.118800 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:24:39.118811 | orchestrator | 2025-09-16 00:24:39.118821 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-16 00:24:39.118833 | orchestrator | Tuesday 16 September 2025 00:24:35 +0000 (0:00:01.209) 0:00:09.403 ***** 2025-09-16 00:24:39.118844 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-09-16 00:24:39.118865 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To 2025-09-16 00:24:39.118875 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-09-16 00:24:39.118886 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-09-16 00:24:39.118917 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-09-16 00:24:39.118929 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-09-16 00:24:39.118940 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-09-16 00:24:39.118950 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-09-16 00:24:39.118961 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-09-16 00:24:39.118972 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-09-16 00:24:39.118982 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-09-16 00:24:39.118993 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-09-16 00:24:39.119004 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-09-16 00:24:39.119014 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-09-16 00:24:39.119025 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-09-16 00:24:39.119035 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-09-16 00:24:39.119046 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-09-16 00:24:39.119057 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-09-16 00:24:39.119067 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-09-16 00:24:39.119078 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-09-16 00:24:39.119088 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-09-16 00:24:39.119099 | orchestrator | 2025-09-16 00:24:39.119110 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-16 00:24:39.119121 | orchestrator | Tuesday 
16 September 2025 00:24:37 +0000 (0:00:01.323) 0:00:10.726 ***** 2025-09-16 00:24:39.119132 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:24:39.119143 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:24:39.119153 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:24:39.119164 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:24:39.119175 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:24:39.119185 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:24:39.119196 | orchestrator | 2025-09-16 00:24:39.119206 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-16 00:24:39.119217 | orchestrator | Tuesday 16 September 2025 00:24:37 +0000 (0:00:00.138) 0:00:10.865 ***** 2025-09-16 00:24:39.119228 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:24:39.119238 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:24:39.119249 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:24:39.119259 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:24:39.119270 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:24:39.119280 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:24:39.119291 | orchestrator | 2025-09-16 00:24:39.119302 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-16 00:24:39.119313 | orchestrator | Tuesday 16 September 2025 00:24:37 +0000 (0:00:00.523) 0:00:11.389 ***** 2025-09-16 00:24:39.119324 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:24:39.119334 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:24:39.119345 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:24:39.119355 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:24:39.119366 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:24:39.119377 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:24:39.119387 | orchestrator | 2025-09-16 00:24:39.119405 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-16 00:24:39.119416 | orchestrator | Tuesday 16 September 2025 00:24:38 +0000 (0:00:00.129) 0:00:11.518 ***** 2025-09-16 00:24:39.119427 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-16 00:24:39.119441 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:24:39.119452 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-16 00:24:39.119463 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:24:39.119474 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-16 00:24:39.119484 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:24:39.119495 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-16 00:24:39.119506 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-16 00:24:39.119517 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:24:39.119527 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:24:39.119538 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-16 00:24:39.119548 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:24:39.119559 | orchestrator | 2025-09-16 00:24:39.119586 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-16 00:24:39.119598 | orchestrator | Tuesday 16 September 2025 00:24:38 +0000 (0:00:00.664) 0:00:12.182 ***** 2025-09-16 00:24:39.119609 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:24:39.119620 | orchestrator | skipping: [testbed-node-1] 
2025-09-16 00:24:39.119630 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:24:39.119646 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:24:39.119657 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:24:39.119668 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:24:39.119678 | orchestrator | 2025-09-16 00:24:39.119729 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-16 00:24:39.119741 | orchestrator | Tuesday 16 September 2025 00:24:38 +0000 (0:00:00.140) 0:00:12.323 ***** 2025-09-16 00:24:39.119751 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:24:39.119762 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:24:39.119773 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:24:39.119784 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:24:39.119794 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:24:39.119805 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:24:39.119815 | orchestrator | 2025-09-16 00:24:39.119826 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-16 00:24:39.119837 | orchestrator | Tuesday 16 September 2025 00:24:39 +0000 (0:00:00.114) 0:00:12.438 ***** 2025-09-16 00:24:39.119848 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:24:39.119859 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:24:39.119869 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:24:39.119880 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:24:39.119899 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:24:40.115359 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:24:40.115454 | orchestrator | 2025-09-16 00:24:40.115469 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-16 00:24:40.115482 | orchestrator | Tuesday 16 September 2025 00:24:39 +0000 (0:00:00.104) 0:00:12.543 ***** 2025-09-16 00:24:40.115493 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:24:40.115504 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:24:40.115515 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:24:40.115525 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:24:40.115536 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:24:40.115547 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:24:40.115558 | orchestrator | 2025-09-16 00:24:40.115569 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-16 00:24:40.115580 | orchestrator | Tuesday 16 September 2025 00:24:39 +0000 (0:00:00.635) 0:00:13.178 ***** 2025-09-16 00:24:40.115590 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:24:40.115601 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:24:40.115612 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:24:40.115649 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:24:40.115660 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:24:40.115670 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:24:40.115723 | orchestrator | 2025-09-16 00:24:40.115736 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:24:40.115749 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-16 00:24:40.115761 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 
ignored=0 2025-09-16 00:24:40.115772 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-16 00:24:40.115783 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-16 00:24:40.115794 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-16 00:24:40.115805 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-16 00:24:40.115816 | orchestrator | 2025-09-16 00:24:40.115826 | orchestrator | 2025-09-16 00:24:40.115837 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:24:40.115848 | orchestrator | Tuesday 16 September 2025 00:24:39 +0000 (0:00:00.202) 0:00:13.381 ***** 2025-09-16 00:24:40.115859 | orchestrator | =============================================================================== 2025-09-16 00:24:40.115870 | orchestrator | Gathering Facts --------------------------------------------------------- 3.57s 2025-09-16 00:24:40.115880 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 2.17s 2025-09-16 00:24:40.115891 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.32s 2025-09-16 00:24:40.115903 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.21s 2025-09-16 00:24:40.115914 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.84s 2025-09-16 00:24:40.115926 | orchestrator | Do not require tty for all users ---------------------------------------- 0.68s 2025-09-16 00:24:40.115938 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.66s 2025-09-16 00:24:40.115950 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.64s 2025-09-16 00:24:40.115961 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.56s 2025-09-16 00:24:40.115974 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.52s 2025-09-16 00:24:40.115987 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.20s 2025-09-16 00:24:40.115999 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s 2025-09-16 00:24:40.116011 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.14s 2025-09-16 00:24:40.116023 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.14s 2025-09-16 00:24:40.116035 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.13s 2025-09-16 00:24:40.116046 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.12s 2025-09-16 00:24:40.116058 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.11s 2025-09-16 00:24:40.116070 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.10s 2025-09-16 00:24:40.294253 | orchestrator | + osism apply --environment custom facts 2025-09-16 00:24:41.956191 | orchestrator | 2025-09-16 00:24:41 | INFO  | Trying to run play facts in environment custom 2025-09-16 00:24:52.086128 | orchestrator | 2025-09-16 00:24:52 | INFO  | Task 
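The PLAY RECAP and TASKS RECAP above are the quickest place to confirm that the operator rollout succeeded: every node reports failed=0 and unreachable=0. A hypothetical helper for scanning a saved copy of this console output (job-output.txt is a placeholder filename, not something the job produces):

    # Flag any recap line that reports failed or unreachable hosts.
    if grep -E 'failed=[1-9][0-9]*|unreachable=[1-9][0-9]*' job-output.txt; then
        echo "at least one host failed or was unreachable"
    else
        echo "all recap lines are clean"
    fi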
6522401d-20a5-409a-861e-e04fc03547f9 (facts) was prepared for execution. 2025-09-16 00:24:52.086267 | orchestrator | 2025-09-16 00:24:52 | INFO  | It takes a moment until task 6522401d-20a5-409a-861e-e04fc03547f9 (facts) has been started and output is visible here. 2025-09-16 00:25:34.718524 | orchestrator | 2025-09-16 00:25:34.718641 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-09-16 00:25:34.718661 | orchestrator | 2025-09-16 00:25:34.718674 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-16 00:25:34.718718 | orchestrator | Tuesday 16 September 2025 00:24:55 +0000 (0:00:00.085) 0:00:00.085 ***** 2025-09-16 00:25:34.718730 | orchestrator | ok: [testbed-manager] 2025-09-16 00:25:34.718742 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:25:34.718755 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:25:34.718766 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:25:34.718777 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:25:34.718788 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:25:34.718799 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:25:34.718810 | orchestrator | 2025-09-16 00:25:34.718821 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-09-16 00:25:34.718832 | orchestrator | Tuesday 16 September 2025 00:24:57 +0000 (0:00:01.322) 0:00:01.407 ***** 2025-09-16 00:25:34.718843 | orchestrator | ok: [testbed-manager] 2025-09-16 00:25:34.718854 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:25:34.718865 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:25:34.718876 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:25:34.718886 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:25:34.718897 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:25:34.718908 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:25:34.718918 | orchestrator | 2025-09-16 00:25:34.718929 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-09-16 00:25:34.718940 | orchestrator | 2025-09-16 00:25:34.718951 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-16 00:25:34.718961 | orchestrator | Tuesday 16 September 2025 00:24:58 +0000 (0:00:01.120) 0:00:02.528 ***** 2025-09-16 00:25:34.718972 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:25:34.718983 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:25:34.718994 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:25:34.719005 | orchestrator | 2025-09-16 00:25:34.719016 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-16 00:25:34.719027 | orchestrator | Tuesday 16 September 2025 00:24:58 +0000 (0:00:00.124) 0:00:02.653 ***** 2025-09-16 00:25:34.719038 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:25:34.719049 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:25:34.719060 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:25:34.719070 | orchestrator | 2025-09-16 00:25:34.719081 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-16 00:25:34.719092 | orchestrator | Tuesday 16 September 2025 00:24:58 +0000 (0:00:00.202) 0:00:02.855 ***** 2025-09-16 00:25:34.719103 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:25:34.719114 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:25:34.719125 
| orchestrator | ok: [testbed-node-5] 2025-09-16 00:25:34.719136 | orchestrator | 2025-09-16 00:25:34.719147 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-16 00:25:34.719158 | orchestrator | Tuesday 16 September 2025 00:24:58 +0000 (0:00:00.194) 0:00:03.049 ***** 2025-09-16 00:25:34.719170 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:25:34.719183 | orchestrator | 2025-09-16 00:25:34.719194 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-16 00:25:34.719205 | orchestrator | Tuesday 16 September 2025 00:24:59 +0000 (0:00:00.134) 0:00:03.184 ***** 2025-09-16 00:25:34.719235 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:25:34.719246 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:25:34.719257 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:25:34.719267 | orchestrator | 2025-09-16 00:25:34.719285 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-16 00:25:34.719309 | orchestrator | Tuesday 16 September 2025 00:24:59 +0000 (0:00:00.483) 0:00:03.667 ***** 2025-09-16 00:25:34.719337 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:25:34.719357 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:25:34.719375 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:25:34.719394 | orchestrator | 2025-09-16 00:25:34.719415 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-16 00:25:34.719434 | orchestrator | Tuesday 16 September 2025 00:24:59 +0000 (0:00:00.120) 0:00:03.788 ***** 2025-09-16 00:25:34.719454 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:25:34.719466 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:25:34.719477 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:25:34.719487 | orchestrator | 2025-09-16 00:25:34.719497 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-16 00:25:34.719508 | orchestrator | Tuesday 16 September 2025 00:25:00 +0000 (0:00:01.086) 0:00:04.875 ***** 2025-09-16 00:25:34.719531 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:25:34.719543 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:25:34.719553 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:25:34.719564 | orchestrator | 2025-09-16 00:25:34.719575 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-16 00:25:34.719590 | orchestrator | Tuesday 16 September 2025 00:25:01 +0000 (0:00:00.490) 0:00:05.365 ***** 2025-09-16 00:25:34.719601 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:25:34.719667 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:25:34.719706 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:25:34.719718 | orchestrator | 2025-09-16 00:25:34.719729 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-16 00:25:34.719739 | orchestrator | Tuesday 16 September 2025 00:25:02 +0000 (0:00:01.023) 0:00:06.389 ***** 2025-09-16 00:25:34.719751 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:25:34.719761 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:25:34.719772 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:25:34.719782 | orchestrator | 2025-09-16 00:25:34.719793 | 
orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-09-16 00:25:34.719804 | orchestrator | Tuesday 16 September 2025 00:25:18 +0000 (0:00:16.677) 0:00:23.066 ***** 2025-09-16 00:25:34.719815 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:25:34.719825 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:25:34.719836 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:25:34.719846 | orchestrator | 2025-09-16 00:25:34.719857 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-09-16 00:25:34.719885 | orchestrator | Tuesday 16 September 2025 00:25:18 +0000 (0:00:00.100) 0:00:23.167 ***** 2025-09-16 00:25:34.719897 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:25:34.719908 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:25:34.719918 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:25:34.719929 | orchestrator | 2025-09-16 00:25:34.719940 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-16 00:25:34.719951 | orchestrator | Tuesday 16 September 2025 00:25:25 +0000 (0:00:06.430) 0:00:29.598 ***** 2025-09-16 00:25:34.719961 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:25:34.719972 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:25:34.719983 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:25:34.719993 | orchestrator | 2025-09-16 00:25:34.720004 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-16 00:25:34.720015 | orchestrator | Tuesday 16 September 2025 00:25:25 +0000 (0:00:00.442) 0:00:30.041 ***** 2025-09-16 00:25:34.720026 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-09-16 00:25:34.720047 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-09-16 00:25:34.720057 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-09-16 00:25:34.720068 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-09-16 00:25:34.720079 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-09-16 00:25:34.720090 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-09-16 00:25:34.720100 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-09-16 00:25:34.720111 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-09-16 00:25:34.720121 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-09-16 00:25:34.720132 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-09-16 00:25:34.720143 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-09-16 00:25:34.720153 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-09-16 00:25:34.720164 | orchestrator | 2025-09-16 00:25:34.720174 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-16 00:25:34.720185 | orchestrator | Tuesday 16 September 2025 00:25:29 +0000 (0:00:03.607) 0:00:33.648 ***** 2025-09-16 00:25:34.720196 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:25:34.720206 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:25:34.720217 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:25:34.720228 | orchestrator | 2025-09-16 00:25:34.720238 | orchestrator | PLAY [Gather facts for all hosts] 
********************************************** 2025-09-16 00:25:34.720249 | orchestrator | 2025-09-16 00:25:34.720260 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-16 00:25:34.720271 | orchestrator | Tuesday 16 September 2025 00:25:30 +0000 (0:00:01.333) 0:00:34.982 ***** 2025-09-16 00:25:34.720281 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:25:34.720292 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:25:34.720303 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:25:34.720313 | orchestrator | ok: [testbed-manager] 2025-09-16 00:25:34.720324 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:25:34.720334 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:25:34.720345 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:25:34.720355 | orchestrator | 2025-09-16 00:25:34.720366 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:25:34.720378 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:25:34.720389 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:25:34.720401 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:25:34.720412 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:25:34.720423 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:25:34.720434 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:25:34.720450 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:25:34.720460 | orchestrator | 2025-09-16 00:25:34.720471 | orchestrator | 2025-09-16 00:25:34.720489 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:25:34.720508 | orchestrator | Tuesday 16 September 2025 00:25:34 +0000 (0:00:03.880) 0:00:38.862 ***** 2025-09-16 00:25:34.720525 | orchestrator | =============================================================================== 2025-09-16 00:25:34.720552 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.68s 2025-09-16 00:25:34.720568 | orchestrator | Install required packages (Debian) -------------------------------------- 6.43s 2025-09-16 00:25:34.720586 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.88s 2025-09-16 00:25:34.720606 | orchestrator | Copy fact files --------------------------------------------------------- 3.61s 2025-09-16 00:25:34.720625 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.33s 2025-09-16 00:25:34.720644 | orchestrator | Create custom facts directory ------------------------------------------- 1.32s 2025-09-16 00:25:34.720672 | orchestrator | Copy fact file ---------------------------------------------------------- 1.12s 2025-09-16 00:25:34.911296 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.09s 2025-09-16 00:25:34.911361 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.02s 2025-09-16 00:25:34.911374 | orchestrator | osism.commons.repository : Remove sources.list file 
--------------------- 0.49s 2025-09-16 00:25:34.911384 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.48s 2025-09-16 00:25:34.911395 | orchestrator | Create custom facts directory ------------------------------------------- 0.44s 2025-09-16 00:25:34.911406 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s 2025-09-16 00:25:34.911417 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.19s 2025-09-16 00:25:34.911428 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s 2025-09-16 00:25:34.911439 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s 2025-09-16 00:25:34.911450 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s 2025-09-16 00:25:34.911460 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2025-09-16 00:25:35.189512 | orchestrator | + osism apply bootstrap 2025-09-16 00:25:47.144847 | orchestrator | 2025-09-16 00:25:47 | INFO  | Task d6f8631d-7471-4fde-8feb-0e3d8bf9fa07 (bootstrap) was prepared for execution. 2025-09-16 00:25:47.144925 | orchestrator | 2025-09-16 00:25:47 | INFO  | It takes a moment until task d6f8631d-7471-4fde-8feb-0e3d8bf9fa07 (bootstrap) has been started and output is visible here. 2025-09-16 00:26:02.390322 | orchestrator | 2025-09-16 00:26:02.390437 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-09-16 00:26:02.390455 | orchestrator | 2025-09-16 00:26:02.390467 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-09-16 00:26:02.390479 | orchestrator | Tuesday 16 September 2025 00:25:51 +0000 (0:00:00.121) 0:00:00.121 ***** 2025-09-16 00:26:02.390490 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:02.390503 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:02.390514 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:02.390525 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:02.390536 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:26:02.390547 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:26:02.390557 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:26:02.390568 | orchestrator | 2025-09-16 00:26:02.390579 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-16 00:26:02.390590 | orchestrator | 2025-09-16 00:26:02.390602 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-16 00:26:02.390613 | orchestrator | Tuesday 16 September 2025 00:25:51 +0000 (0:00:00.159) 0:00:00.280 ***** 2025-09-16 00:26:02.390624 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:26:02.390635 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:26:02.390645 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:26:02.390656 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:02.390667 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:02.390678 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:02.390723 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:02.390763 | orchestrator | 2025-09-16 00:26:02.390775 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-09-16 00:26:02.390786 | orchestrator | 2025-09-16 00:26:02.390797 | orchestrator | TASK [Gathers facts about 
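The facts run above loops the "Copy fact files" task over items such as testbed_ceph_devices; these are Ansible local facts, which by default live under /etc/ansible/facts.d on each node and surface as ansible_local once facts are gathered. A sketch for inspecting them on a node (the .fact extension and exact filenames are assumptions derived from the item names above):

    # Assumption: the looped items land as local fact files in Ansible's
    # default facts directory on the target node.
    ssh testbed-node-3 'ls /etc/ansible/facts.d/ && cat /etc/ansible/facts.d/testbed_ceph_devices.fact'

    # The same data is then visible under ansible_local after fact gathering:
    ansible testbed-node-3 -m setup -a 'filter=ansible_local'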
hosts] *********************************************** 2025-09-16 00:26:02.390808 | orchestrator | Tuesday 16 September 2025 00:25:54 +0000 (0:00:03.650) 0:00:03.931 ***** 2025-09-16 00:26:02.390819 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-16 00:26:02.390831 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-16 00:26:02.390843 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-09-16 00:26:02.390855 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-16 00:26:02.390867 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-09-16 00:26:02.390880 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-16 00:26:02.390893 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-16 00:26:02.390905 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-09-16 00:26:02.390918 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-16 00:26:02.390931 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-09-16 00:26:02.390943 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-09-16 00:26:02.390957 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-16 00:26:02.390969 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-16 00:26:02.390979 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-09-16 00:26:02.390990 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-09-16 00:26:02.391001 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-16 00:26:02.391038 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-16 00:26:02.391065 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-16 00:26:02.391077 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:26:02.391088 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-09-16 00:26:02.391098 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-09-16 00:26:02.391109 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-09-16 00:26:02.391119 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-16 00:26:02.391130 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-16 00:26:02.391141 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-09-16 00:26:02.391151 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-16 00:26:02.391178 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-16 00:26:02.391190 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:26:02.391200 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-09-16 00:26:02.391211 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-16 00:26:02.391222 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-16 00:26:02.391232 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-09-16 00:26:02.391243 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-16 00:26:02.391253 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-09-16 00:26:02.391264 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:26:02.391274 | orchestrator | 
skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-16 00:26:02.391285 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:26:02.391296 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-09-16 00:26:02.391306 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-16 00:26:02.391317 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-09-16 00:26:02.391327 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-09-16 00:26:02.391346 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-16 00:26:02.391357 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-09-16 00:26:02.391369 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-09-16 00:26:02.391379 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-16 00:26:02.391390 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-09-16 00:26:02.391420 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-16 00:26:02.391431 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:26:02.391442 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-16 00:26:02.391453 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-09-16 00:26:02.391464 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-16 00:26:02.391491 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:26:02.391513 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-16 00:26:02.391524 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-16 00:26:02.391535 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-16 00:26:02.391546 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:26:02.391556 | orchestrator | 2025-09-16 00:26:02.391567 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-09-16 00:26:02.391578 | orchestrator | 2025-09-16 00:26:02.391589 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-09-16 00:26:02.391600 | orchestrator | Tuesday 16 September 2025 00:25:55 +0000 (0:00:00.464) 0:00:04.395 ***** 2025-09-16 00:26:02.391610 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:02.391621 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:02.391632 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:02.391642 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:26:02.391653 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:02.391664 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:26:02.391674 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:26:02.391737 | orchestrator | 2025-09-16 00:26:02.391754 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-09-16 00:26:02.391766 | orchestrator | Tuesday 16 September 2025 00:25:56 +0000 (0:00:01.255) 0:00:05.651 ***** 2025-09-16 00:26:02.391777 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:02.391787 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:02.391798 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:02.391809 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:02.391819 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:26:02.391830 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:26:02.391840 | orchestrator | ok: [testbed-node-2] 2025-09-16 
00:26:02.391851 | orchestrator | 2025-09-16 00:26:02.391862 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-09-16 00:26:02.391873 | orchestrator | Tuesday 16 September 2025 00:25:57 +0000 (0:00:01.219) 0:00:06.871 ***** 2025-09-16 00:26:02.391885 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:26:02.391898 | orchestrator | 2025-09-16 00:26:02.391909 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-09-16 00:26:02.391920 | orchestrator | Tuesday 16 September 2025 00:25:58 +0000 (0:00:00.254) 0:00:07.126 ***** 2025-09-16 00:26:02.391930 | orchestrator | changed: [testbed-manager] 2025-09-16 00:26:02.391941 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:26:02.391958 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:26:02.391970 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:26:02.391980 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:26:02.391991 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:26:02.392002 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:26:02.392012 | orchestrator | 2025-09-16 00:26:02.392031 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-09-16 00:26:02.392042 | orchestrator | Tuesday 16 September 2025 00:26:00 +0000 (0:00:01.996) 0:00:09.122 ***** 2025-09-16 00:26:02.392053 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:26:02.392065 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:26:02.392078 | orchestrator | 2025-09-16 00:26:02.392089 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-09-16 00:26:02.392099 | orchestrator | Tuesday 16 September 2025 00:26:00 +0000 (0:00:00.255) 0:00:09.377 ***** 2025-09-16 00:26:02.392110 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:26:02.392121 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:26:02.392131 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:26:02.392142 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:26:02.392152 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:26:02.392163 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:26:02.392174 | orchestrator | 2025-09-16 00:26:02.392184 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-09-16 00:26:02.392195 | orchestrator | Tuesday 16 September 2025 00:26:01 +0000 (0:00:00.986) 0:00:10.364 ***** 2025-09-16 00:26:02.392206 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:26:02.392216 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:26:02.392227 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:26:02.392237 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:26:02.392248 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:26:02.392258 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:26:02.392269 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:26:02.392280 | orchestrator | 2025-09-16 00:26:02.392290 | orchestrator | TASK [osism.commons.proxy : Remove system wide 
settings in environment file] *** 2025-09-16 00:26:02.392301 | orchestrator | Tuesday 16 September 2025 00:26:01 +0000 (0:00:00.574) 0:00:10.939 ***** 2025-09-16 00:26:02.392312 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:26:02.392322 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:26:02.392333 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:26:02.392344 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:26:02.392354 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:26:02.392365 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:26:02.392375 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:02.392386 | orchestrator | 2025-09-16 00:26:02.392397 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-16 00:26:02.392410 | orchestrator | Tuesday 16 September 2025 00:26:02 +0000 (0:00:00.411) 0:00:11.350 ***** 2025-09-16 00:26:02.392421 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:26:02.392432 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:26:02.392451 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:26:14.668130 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:26:14.668248 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:26:14.668264 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:26:14.668276 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:26:14.668287 | orchestrator | 2025-09-16 00:26:14.668301 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-16 00:26:14.668313 | orchestrator | Tuesday 16 September 2025 00:26:02 +0000 (0:00:00.203) 0:00:11.554 ***** 2025-09-16 00:26:14.668326 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:26:14.668355 | orchestrator | 2025-09-16 00:26:14.668366 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-16 00:26:14.668378 | orchestrator | Tuesday 16 September 2025 00:26:02 +0000 (0:00:00.265) 0:00:11.820 ***** 2025-09-16 00:26:14.668414 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:26:14.668426 | orchestrator | 2025-09-16 00:26:14.668437 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-16 00:26:14.668448 | orchestrator | Tuesday 16 September 2025 00:26:03 +0000 (0:00:00.295) 0:00:12.115 ***** 2025-09-16 00:26:14.668459 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:14.668471 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:14.668482 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:26:14.668493 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:14.668503 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:14.668514 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:26:14.668525 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:26:14.668535 | orchestrator | 2025-09-16 00:26:14.668546 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-16 00:26:14.668557 | orchestrator | Tuesday 16 September 
2025 00:26:04 +0000 (0:00:01.506) 0:00:13.622 ***** 2025-09-16 00:26:14.668568 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:26:14.668579 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:26:14.668590 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:26:14.668601 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:26:14.668611 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:26:14.668622 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:26:14.668633 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:26:14.668643 | orchestrator | 2025-09-16 00:26:14.668654 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-16 00:26:14.668665 | orchestrator | Tuesday 16 September 2025 00:26:04 +0000 (0:00:00.219) 0:00:13.842 ***** 2025-09-16 00:26:14.668678 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:14.668691 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:14.668741 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:14.668754 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:14.668766 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:26:14.668778 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:26:14.668791 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:26:14.668803 | orchestrator | 2025-09-16 00:26:14.668815 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-16 00:26:14.668828 | orchestrator | Tuesday 16 September 2025 00:26:05 +0000 (0:00:00.564) 0:00:14.406 ***** 2025-09-16 00:26:14.668840 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:26:14.668852 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:26:14.668865 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:26:14.668877 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:26:14.668889 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:26:14.668901 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:26:14.668913 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:26:14.668924 | orchestrator | 2025-09-16 00:26:14.668937 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-16 00:26:14.668951 | orchestrator | Tuesday 16 September 2025 00:26:05 +0000 (0:00:00.225) 0:00:14.632 ***** 2025-09-16 00:26:14.668963 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:14.668975 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:26:14.668987 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:26:14.668999 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:26:14.669012 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:26:14.669023 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:26:14.669035 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:26:14.669046 | orchestrator | 2025-09-16 00:26:14.669056 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-16 00:26:14.669067 | orchestrator | Tuesday 16 September 2025 00:26:06 +0000 (0:00:00.573) 0:00:15.205 ***** 2025-09-16 00:26:14.669087 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:14.669097 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:26:14.669108 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:26:14.669119 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:26:14.669129 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:26:14.669140 | orchestrator | changed: 
[testbed-node-0] 2025-09-16 00:26:14.669150 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:26:14.669161 | orchestrator | 2025-09-16 00:26:14.669172 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-16 00:26:14.669182 | orchestrator | Tuesday 16 September 2025 00:26:07 +0000 (0:00:01.141) 0:00:16.347 ***** 2025-09-16 00:26:14.669193 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:14.669203 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:26:14.669214 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:14.669224 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:14.669236 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:26:14.669246 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:14.669257 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:26:14.669268 | orchestrator | 2025-09-16 00:26:14.669278 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-16 00:26:14.669290 | orchestrator | Tuesday 16 September 2025 00:26:08 +0000 (0:00:01.141) 0:00:17.489 ***** 2025-09-16 00:26:14.669319 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:26:14.669331 | orchestrator | 2025-09-16 00:26:14.669342 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-16 00:26:14.669352 | orchestrator | Tuesday 16 September 2025 00:26:08 +0000 (0:00:00.380) 0:00:17.869 ***** 2025-09-16 00:26:14.669363 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:26:14.669374 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:26:14.669384 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:26:14.669395 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:26:14.669405 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:26:14.669416 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:26:14.669427 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:26:14.669437 | orchestrator | 2025-09-16 00:26:14.669448 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-16 00:26:14.669459 | orchestrator | Tuesday 16 September 2025 00:26:10 +0000 (0:00:01.303) 0:00:19.172 ***** 2025-09-16 00:26:14.669469 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:14.669480 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:14.669490 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:14.669501 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:14.669512 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:26:14.669564 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:26:14.669577 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:26:14.669587 | orchestrator | 2025-09-16 00:26:14.669598 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-16 00:26:14.669609 | orchestrator | Tuesday 16 September 2025 00:26:10 +0000 (0:00:00.207) 0:00:19.380 ***** 2025-09-16 00:26:14.669620 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:14.669630 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:14.669641 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:14.669651 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:14.669662 | orchestrator | ok: [testbed-node-0] 2025-09-16 
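The resolvconf tasks above switch the nodes over to systemd-resolved and point /etc/resolv.conf at the stub resolver; the manager already had the link in place, hence ok instead of changed. A sketch of the equivalent manual steps, for reference only:

    # Manual equivalent of what the resolvconf role reports (sketch):
    sudo systemctl enable --now systemd-resolved
    sudo ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
    resolvectl status    # confirm which DNS servers the stub resolver uses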
00:26:14.669673 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:26:14.669683 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:26:14.669709 | orchestrator | 2025-09-16 00:26:14.669721 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-16 00:26:14.669731 | orchestrator | Tuesday 16 September 2025 00:26:10 +0000 (0:00:00.204) 0:00:19.584 ***** 2025-09-16 00:26:14.669742 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:14.669752 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:14.669771 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:14.669782 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:14.669792 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:26:14.669803 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:26:14.669813 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:26:14.669824 | orchestrator | 2025-09-16 00:26:14.669835 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-16 00:26:14.669846 | orchestrator | Tuesday 16 September 2025 00:26:10 +0000 (0:00:00.200) 0:00:19.784 ***** 2025-09-16 00:26:14.669862 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:26:14.669875 | orchestrator | 2025-09-16 00:26:14.669886 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-16 00:26:14.669896 | orchestrator | Tuesday 16 September 2025 00:26:10 +0000 (0:00:00.293) 0:00:20.077 ***** 2025-09-16 00:26:14.669907 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:14.669917 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:14.669928 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:14.669938 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:14.669949 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:26:14.669959 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:26:14.669970 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:26:14.669980 | orchestrator | 2025-09-16 00:26:14.669991 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-16 00:26:14.670002 | orchestrator | Tuesday 16 September 2025 00:26:11 +0000 (0:00:00.529) 0:00:20.607 ***** 2025-09-16 00:26:14.670066 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:26:14.670081 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:26:14.670092 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:26:14.670102 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:26:14.670113 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:26:14.670123 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:26:14.670134 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:26:14.670144 | orchestrator | 2025-09-16 00:26:14.670155 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-16 00:26:14.670165 | orchestrator | Tuesday 16 September 2025 00:26:11 +0000 (0:00:00.249) 0:00:20.857 ***** 2025-09-16 00:26:14.670176 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:14.670187 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:14.670197 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:14.670208 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:14.670218 | orchestrator | changed: 
[testbed-node-1] 2025-09-16 00:26:14.670229 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:26:14.670239 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:26:14.670250 | orchestrator | 2025-09-16 00:26:14.670260 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-16 00:26:14.670271 | orchestrator | Tuesday 16 September 2025 00:26:12 +0000 (0:00:01.122) 0:00:21.979 ***** 2025-09-16 00:26:14.670282 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:14.670292 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:14.670303 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:14.670314 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:14.670324 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:26:14.670335 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:26:14.670345 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:26:14.670356 | orchestrator | 2025-09-16 00:26:14.670367 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-16 00:26:14.670378 | orchestrator | Tuesday 16 September 2025 00:26:13 +0000 (0:00:00.563) 0:00:22.543 ***** 2025-09-16 00:26:14.670388 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:14.670399 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:14.670410 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:14.670420 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:14.670449 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:26:59.032487 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:26:59.032603 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:26:59.032620 | orchestrator | 2025-09-16 00:26:59.032634 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-16 00:26:59.032647 | orchestrator | Tuesday 16 September 2025 00:26:14 +0000 (0:00:01.204) 0:00:23.747 ***** 2025-09-16 00:26:59.032658 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:59.032670 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:59.032682 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:59.032692 | orchestrator | changed: [testbed-manager] 2025-09-16 00:26:59.032703 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:26:59.032714 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:26:59.032771 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:26:59.032784 | orchestrator | 2025-09-16 00:26:59.032795 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-09-16 00:26:59.032806 | orchestrator | Tuesday 16 September 2025 00:26:34 +0000 (0:00:19.652) 0:00:43.400 ***** 2025-09-16 00:26:59.032817 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:59.032828 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:59.032839 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:59.032850 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:59.032861 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:26:59.032872 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:26:59.032882 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:26:59.032893 | orchestrator | 2025-09-16 00:26:59.032904 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-09-16 00:26:59.032915 | orchestrator | Tuesday 16 September 2025 00:26:34 +0000 (0:00:00.235) 0:00:43.636 ***** 2025-09-16 00:26:59.032926 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:59.032937 | orchestrator 
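On Ubuntu 24.04 the repository role removes the legacy sources.list and ships a deb822-style ubuntu.sources file instead, which is why the "Include tasks for Ubuntu < 24.04" step is skipped throughout. A quick check on a node; the exact path is an assumption based on the sources.list.d directory the role creates:

    # Assumed location of the deb822 sources file copied by the role above.
    ssh testbed-node-0 'cat /etc/apt/sources.list.d/ubuntu.sources && sudo apt-get update -q'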
| ok: [testbed-node-3] 2025-09-16 00:26:59.032947 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:59.032958 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:59.032969 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:26:59.032980 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:26:59.032992 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:26:59.033004 | orchestrator | 2025-09-16 00:26:59.033017 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-09-16 00:26:59.033029 | orchestrator | Tuesday 16 September 2025 00:26:34 +0000 (0:00:00.223) 0:00:43.860 ***** 2025-09-16 00:26:59.033042 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:59.033054 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:59.033067 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:59.033080 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:59.033093 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:26:59.033112 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:26:59.033131 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:26:59.033151 | orchestrator | 2025-09-16 00:26:59.033170 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-09-16 00:26:59.033189 | orchestrator | Tuesday 16 September 2025 00:26:35 +0000 (0:00:00.239) 0:00:44.099 ***** 2025-09-16 00:26:59.033231 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:26:59.033255 | orchestrator | 2025-09-16 00:26:59.033273 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-09-16 00:26:59.033286 | orchestrator | Tuesday 16 September 2025 00:26:35 +0000 (0:00:00.283) 0:00:44.382 ***** 2025-09-16 00:26:59.033299 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:59.033312 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:59.033324 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:59.033336 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:59.033346 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:26:59.033357 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:26:59.033387 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:26:59.033398 | orchestrator | 2025-09-16 00:26:59.033409 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-09-16 00:26:59.033420 | orchestrator | Tuesday 16 September 2025 00:26:37 +0000 (0:00:01.927) 0:00:46.310 ***** 2025-09-16 00:26:59.033430 | orchestrator | changed: [testbed-manager] 2025-09-16 00:26:59.033441 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:26:59.033452 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:26:59.033462 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:26:59.033472 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:26:59.033483 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:26:59.033493 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:26:59.033504 | orchestrator | 2025-09-16 00:26:59.033515 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-09-16 00:26:59.033525 | orchestrator | Tuesday 16 September 2025 00:26:38 +0000 (0:00:01.106) 0:00:47.416 ***** 2025-09-16 00:26:59.033536 | orchestrator | ok: [testbed-manager] 2025-09-16 
00:26:59.033547 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:59.033558 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:26:59.033568 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:26:59.033579 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:26:59.033589 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:59.033600 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:59.033610 | orchestrator | 2025-09-16 00:26:59.033621 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-09-16 00:26:59.033631 | orchestrator | Tuesday 16 September 2025 00:26:39 +0000 (0:00:00.935) 0:00:48.352 ***** 2025-09-16 00:26:59.033643 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:26:59.033656 | orchestrator | 2025-09-16 00:26:59.033667 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-09-16 00:26:59.033679 | orchestrator | Tuesday 16 September 2025 00:26:39 +0000 (0:00:00.314) 0:00:48.667 ***** 2025-09-16 00:26:59.033689 | orchestrator | changed: [testbed-manager] 2025-09-16 00:26:59.033700 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:26:59.033710 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:26:59.033721 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:26:59.033765 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:26:59.033776 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:26:59.033787 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:26:59.033798 | orchestrator | 2025-09-16 00:26:59.033830 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-09-16 00:26:59.033841 | orchestrator | Tuesday 16 September 2025 00:26:40 +0000 (0:00:01.123) 0:00:49.791 ***** 2025-09-16 00:26:59.033852 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:26:59.033862 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:26:59.033873 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:26:59.033883 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:26:59.033894 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:26:59.033904 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:26:59.033915 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:26:59.033925 | orchestrator | 2025-09-16 00:26:59.033936 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-09-16 00:26:59.033947 | orchestrator | Tuesday 16 September 2025 00:26:40 +0000 (0:00:00.298) 0:00:50.089 ***** 2025-09-16 00:26:59.033957 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:26:59.033968 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:26:59.033978 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:26:59.033989 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:26:59.033999 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:26:59.034010 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:26:59.034076 | orchestrator | changed: [testbed-manager] 2025-09-16 00:26:59.034097 | orchestrator | 2025-09-16 00:26:59.034107 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-09-16 00:26:59.034118 | orchestrator | Tuesday 16 September 2025 00:26:53 +0000 (0:00:12.384) 0:01:02.474 
***** 2025-09-16 00:26:59.034129 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:26:59.034140 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:59.034150 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:59.034161 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:26:59.034171 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:59.034182 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:59.034193 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:26:59.034203 | orchestrator | 2025-09-16 00:26:59.034214 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-09-16 00:26:59.034225 | orchestrator | Tuesday 16 September 2025 00:26:54 +0000 (0:00:01.278) 0:01:03.753 ***** 2025-09-16 00:26:59.034235 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:59.034246 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:59.034257 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:59.034275 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:59.034294 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:26:59.034313 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:26:59.034331 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:26:59.034350 | orchestrator | 2025-09-16 00:26:59.034370 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-09-16 00:26:59.034389 | orchestrator | Tuesday 16 September 2025 00:26:55 +0000 (0:00:00.943) 0:01:04.696 ***** 2025-09-16 00:26:59.034407 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:59.034426 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:59.034444 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:59.034463 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:59.034479 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:26:59.034491 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:26:59.034507 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:26:59.034526 | orchestrator | 2025-09-16 00:26:59.034544 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-09-16 00:26:59.034563 | orchestrator | Tuesday 16 September 2025 00:26:55 +0000 (0:00:00.217) 0:01:04.914 ***** 2025-09-16 00:26:59.034582 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:59.034600 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:59.034618 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:59.034637 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:59.034656 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:26:59.034668 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:26:59.034679 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:26:59.034689 | orchestrator | 2025-09-16 00:26:59.034700 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-09-16 00:26:59.034711 | orchestrator | Tuesday 16 September 2025 00:26:56 +0000 (0:00:00.261) 0:01:05.176 ***** 2025-09-16 00:26:59.034722 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:26:59.034766 | orchestrator | 2025-09-16 00:26:59.034777 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-09-16 00:26:59.034788 | orchestrator | Tuesday 16 September 2025 00:26:56 +0000 (0:00:00.330) 0:01:05.506 
***** 2025-09-16 00:26:59.034799 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:59.034809 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:59.034820 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:59.034831 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:26:59.034841 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:59.034851 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:26:59.034862 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:26:59.034873 | orchestrator | 2025-09-16 00:26:59.034883 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-09-16 00:26:59.034894 | orchestrator | Tuesday 16 September 2025 00:26:58 +0000 (0:00:01.823) 0:01:07.330 ***** 2025-09-16 00:26:59.034915 | orchestrator | changed: [testbed-manager] 2025-09-16 00:26:59.034926 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:26:59.034937 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:26:59.034947 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:26:59.034958 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:26:59.034968 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:26:59.034979 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:26:59.034989 | orchestrator | 2025-09-16 00:26:59.035000 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-09-16 00:26:59.035011 | orchestrator | Tuesday 16 September 2025 00:26:58 +0000 (0:00:00.547) 0:01:07.877 ***** 2025-09-16 00:26:59.035022 | orchestrator | ok: [testbed-manager] 2025-09-16 00:26:59.035033 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:26:59.035044 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:26:59.035054 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:26:59.035065 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:26:59.035076 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:26:59.035086 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:26:59.035096 | orchestrator | 2025-09-16 00:26:59.035118 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-09-16 00:29:14.555506 | orchestrator | Tuesday 16 September 2025 00:26:59 +0000 (0:00:00.236) 0:01:08.113 ***** 2025-09-16 00:29:14.555625 | orchestrator | ok: [testbed-manager] 2025-09-16 00:29:14.555642 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:29:14.555654 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:29:14.555665 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:29:14.555675 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:29:14.555686 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:29:14.555697 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:29:14.555708 | orchestrator | 2025-09-16 00:29:14.555792 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-09-16 00:29:14.555806 | orchestrator | Tuesday 16 September 2025 00:27:00 +0000 (0:00:01.143) 0:01:09.257 ***** 2025-09-16 00:29:14.555818 | orchestrator | changed: [testbed-manager] 2025-09-16 00:29:14.555829 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:29:14.555840 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:29:14.555851 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:29:14.555862 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:29:14.555873 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:29:14.555884 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:29:14.555895 | orchestrator | 
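[Editor's note] The osism.commons.packages role itself is not part of this log; as a rough, hypothetical sketch of the apt cache refresh and upgrade-download steps the task names above correspond to (module parameters and values here are assumptions, not taken from the role):

    # hypothetical sketch only - illustrates the pattern, not the actual osism.commons.packages tasks
    - name: Update package cache
      ansible.builtin.apt:
        update_cache: true
        cache_valid_time: 3600    # assumed value for apt_cache_valid_time
    - name: Download upgrade packages
      ansible.builtin.apt:
        upgrade: dist
        download_only: true       # fetch packages first; the later upgrade step then runs quickly
    - name: Upgrade packages
      ansible.builtin.apt:
        upgrade: dist

This is only an illustration of the download-then-upgrade pattern visible in the task names; the real role may structure these steps differently.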
2025-09-16 00:29:14.555906 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-09-16 00:29:14.555918 | orchestrator | Tuesday 16 September 2025 00:27:01 +0000 (0:00:01.555) 0:01:10.813 ***** 2025-09-16 00:29:14.555928 | orchestrator | ok: [testbed-manager] 2025-09-16 00:29:14.555939 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:29:14.555950 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:29:14.555960 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:29:14.555971 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:29:14.555981 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:29:14.555992 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:29:14.556003 | orchestrator | 2025-09-16 00:29:14.556013 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-09-16 00:29:14.556024 | orchestrator | Tuesday 16 September 2025 00:27:04 +0000 (0:00:02.376) 0:01:13.189 ***** 2025-09-16 00:29:14.556035 | orchestrator | ok: [testbed-manager] 2025-09-16 00:29:14.556048 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:29:14.556060 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:29:14.556072 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:29:14.556084 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:29:14.556096 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:29:14.556107 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:29:14.556119 | orchestrator | 2025-09-16 00:29:14.556131 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-09-16 00:29:14.556170 | orchestrator | Tuesday 16 September 2025 00:27:42 +0000 (0:00:38.265) 0:01:51.454 ***** 2025-09-16 00:29:14.556182 | orchestrator | changed: [testbed-manager] 2025-09-16 00:29:14.556194 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:29:14.556206 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:29:14.556218 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:29:14.556230 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:29:14.556242 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:29:14.556255 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:29:14.556267 | orchestrator | 2025-09-16 00:29:14.556284 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-09-16 00:29:14.556297 | orchestrator | Tuesday 16 September 2025 00:29:00 +0000 (0:01:17.979) 0:03:09.434 ***** 2025-09-16 00:29:14.556310 | orchestrator | ok: [testbed-manager] 2025-09-16 00:29:14.556322 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:29:14.556334 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:29:14.556346 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:29:14.556358 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:29:14.556369 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:29:14.556381 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:29:14.556393 | orchestrator | 2025-09-16 00:29:14.556404 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-09-16 00:29:14.556415 | orchestrator | Tuesday 16 September 2025 00:29:02 +0000 (0:00:01.942) 0:03:11.377 ***** 2025-09-16 00:29:14.556426 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:29:14.556436 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:29:14.556447 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:29:14.556458 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:29:14.556468 | orchestrator | 
ok: [testbed-node-2] 2025-09-16 00:29:14.556479 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:29:14.556490 | orchestrator | changed: [testbed-manager] 2025-09-16 00:29:14.556500 | orchestrator | 2025-09-16 00:29:14.556511 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-09-16 00:29:14.556522 | orchestrator | Tuesday 16 September 2025 00:29:13 +0000 (0:00:11.133) 0:03:22.510 ***** 2025-09-16 00:29:14.556542 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-09-16 00:29:14.556564 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-09-16 00:29:14.556600 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-09-16 00:29:14.556614 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-09-16 00:29:14.556634 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-09-16 00:29:14.556646 | orchestrator | 2025-09-16 00:29:14.556657 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-09-16 00:29:14.556668 | orchestrator | Tuesday 16 September 2025 00:29:13 +0000 (0:00:00.351) 0:03:22.862 ***** 2025-09-16 00:29:14.556678 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-16 00:29:14.556689 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:29:14.556700 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-16 00:29:14.556710 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-16 00:29:14.556721 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:29:14.556750 | orchestrator | 
skipping: [testbed-node-4] 2025-09-16 00:29:14.556762 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-16 00:29:14.556772 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:29:14.556783 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-16 00:29:14.556794 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-16 00:29:14.556804 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-16 00:29:14.556815 | orchestrator | 2025-09-16 00:29:14.556826 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-09-16 00:29:14.556842 | orchestrator | Tuesday 16 September 2025 00:29:14 +0000 (0:00:00.662) 0:03:23.524 ***** 2025-09-16 00:29:14.556853 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-16 00:29:14.556865 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-16 00:29:14.556876 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-16 00:29:14.556886 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-16 00:29:14.556897 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-16 00:29:14.556907 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-16 00:29:14.556918 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-16 00:29:14.556928 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-16 00:29:14.556939 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-16 00:29:14.556949 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-16 00:29:14.556960 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-16 00:29:14.556970 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-16 00:29:14.556981 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-16 00:29:14.556992 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-16 00:29:14.557002 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-16 00:29:14.557013 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-16 00:29:14.557023 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-16 00:29:14.557040 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-16 00:29:14.557051 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-16 00:29:14.557062 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-16 
00:29:14.557082 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-16 00:29:21.313167 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-16 00:29:21.313272 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-16 00:29:21.313287 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-16 00:29:21.313299 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-16 00:29:21.313311 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:29:21.313324 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-16 00:29:21.313335 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-16 00:29:21.313346 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-16 00:29:21.313357 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-16 00:29:21.313368 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-16 00:29:21.313379 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-16 00:29:21.313390 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:29:21.313401 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-16 00:29:21.313412 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-16 00:29:21.313422 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-16 00:29:21.313433 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-16 00:29:21.313444 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-16 00:29:21.313455 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-16 00:29:21.313466 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-16 00:29:21.313476 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-16 00:29:21.313487 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:29:21.313499 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-16 00:29:21.313510 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:29:21.313521 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-16 00:29:21.313532 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-16 00:29:21.313543 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-16 00:29:21.313554 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-16 00:29:21.313565 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-16 00:29:21.313575 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-16 00:29:21.313586 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-16 00:29:21.313620 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-16 00:29:21.313631 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-16 00:29:21.313642 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-16 00:29:21.313653 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-16 00:29:21.313664 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-16 00:29:21.313674 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-16 00:29:21.313685 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-16 00:29:21.313695 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-16 00:29:21.313708 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-16 00:29:21.313721 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-16 00:29:21.313765 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-16 00:29:21.313778 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-16 00:29:21.313790 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-16 00:29:21.313802 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-16 00:29:21.313831 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-16 00:29:21.313844 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-16 00:29:21.313856 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-16 00:29:21.313869 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-16 00:29:21.313882 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-16 00:29:21.313894 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-16 00:29:21.313908 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-16 00:29:21.313920 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-16 00:29:21.313932 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-16 00:29:21.313944 | orchestrator | 2025-09-16 00:29:21.313957 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-09-16 00:29:21.313969 | orchestrator | Tuesday 16 September 2025 00:29:19 +0000 
(0:00:04.903) 0:03:28.428 ***** 2025-09-16 00:29:21.313982 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-16 00:29:21.313995 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-16 00:29:21.314082 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-16 00:29:21.314099 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-16 00:29:21.314110 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-16 00:29:21.314121 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-16 00:29:21.314132 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-16 00:29:21.314142 | orchestrator | 2025-09-16 00:29:21.314153 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-09-16 00:29:21.314172 | orchestrator | Tuesday 16 September 2025 00:29:19 +0000 (0:00:00.658) 0:03:29.087 ***** 2025-09-16 00:29:21.314183 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-16 00:29:21.314194 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:29:21.314210 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-16 00:29:21.314221 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-16 00:29:21.314232 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:29:21.314242 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:29:21.314253 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-16 00:29:21.314264 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:29:21.314274 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-16 00:29:21.314285 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-16 00:29:21.314296 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-16 00:29:21.314306 | orchestrator | 2025-09-16 00:29:21.314317 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-09-16 00:29:21.314328 | orchestrator | Tuesday 16 September 2025 00:29:20 +0000 (0:00:00.522) 0:03:29.609 ***** 2025-09-16 00:29:21.314338 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-16 00:29:21.314349 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:29:21.314360 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-16 00:29:21.314370 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:29:21.314381 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-16 00:29:21.314392 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:29:21.314402 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-16 00:29:21.314413 | orchestrator | skipping: [testbed-node-2] 
2025-09-16 00:29:21.314423 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-16 00:29:21.314434 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-16 00:29:21.314444 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-16 00:29:21.314455 | orchestrator | 2025-09-16 00:29:21.314466 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-09-16 00:29:21.314476 | orchestrator | Tuesday 16 September 2025 00:29:21 +0000 (0:00:00.536) 0:03:30.146 ***** 2025-09-16 00:29:21.314487 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:29:21.314497 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:29:21.314508 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:29:21.314518 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:29:21.314536 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:29:32.578068 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:29:32.578193 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:29:32.578210 | orchestrator | 2025-09-16 00:29:32.578223 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-09-16 00:29:32.578236 | orchestrator | Tuesday 16 September 2025 00:29:21 +0000 (0:00:00.257) 0:03:30.404 ***** 2025-09-16 00:29:32.578248 | orchestrator | ok: [testbed-manager] 2025-09-16 00:29:32.578261 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:29:32.578273 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:29:32.578284 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:29:32.578320 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:29:32.578332 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:29:32.578343 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:29:32.578353 | orchestrator | 2025-09-16 00:29:32.578365 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-09-16 00:29:32.578376 | orchestrator | Tuesday 16 September 2025 00:29:26 +0000 (0:00:05.404) 0:03:35.808 ***** 2025-09-16 00:29:32.578387 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-09-16 00:29:32.578399 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-09-16 00:29:32.578409 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:29:32.578421 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-09-16 00:29:32.578432 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:29:32.578443 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-09-16 00:29:32.578454 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:29:32.578465 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:29:32.578476 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-09-16 00:29:32.578486 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-09-16 00:29:32.578498 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:29:32.578512 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:29:32.578523 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-09-16 00:29:32.578534 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:29:32.578545 | orchestrator | 2025-09-16 00:29:32.578558 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-09-16 00:29:32.578571 | orchestrator | Tuesday 16 
September 2025 00:29:27 +0000 (0:00:00.294) 0:03:36.103 ***** 2025-09-16 00:29:32.578583 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-09-16 00:29:32.578595 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-09-16 00:29:32.578607 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-09-16 00:29:32.578619 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-09-16 00:29:32.578631 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-09-16 00:29:32.578643 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-09-16 00:29:32.578655 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-09-16 00:29:32.578668 | orchestrator | 2025-09-16 00:29:32.578680 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-09-16 00:29:32.578707 | orchestrator | Tuesday 16 September 2025 00:29:28 +0000 (0:00:01.187) 0:03:37.290 ***** 2025-09-16 00:29:32.578723 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:29:32.578760 | orchestrator | 2025-09-16 00:29:32.578773 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-09-16 00:29:32.578786 | orchestrator | Tuesday 16 September 2025 00:29:28 +0000 (0:00:00.512) 0:03:37.802 ***** 2025-09-16 00:29:32.578799 | orchestrator | ok: [testbed-manager] 2025-09-16 00:29:32.578811 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:29:32.578823 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:29:32.578836 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:29:32.578848 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:29:32.578859 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:29:32.578870 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:29:32.578881 | orchestrator | 2025-09-16 00:29:32.578892 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-09-16 00:29:32.578903 | orchestrator | Tuesday 16 September 2025 00:29:29 +0000 (0:00:01.166) 0:03:38.969 ***** 2025-09-16 00:29:32.578914 | orchestrator | ok: [testbed-manager] 2025-09-16 00:29:32.578925 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:29:32.578936 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:29:32.578946 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:29:32.578957 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:29:32.578968 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:29:32.578987 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:29:32.578998 | orchestrator | 2025-09-16 00:29:32.579009 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-09-16 00:29:32.579020 | orchestrator | Tuesday 16 September 2025 00:29:30 +0000 (0:00:00.597) 0:03:39.566 ***** 2025-09-16 00:29:32.579031 | orchestrator | changed: [testbed-manager] 2025-09-16 00:29:32.579042 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:29:32.579053 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:29:32.579064 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:29:32.579075 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:29:32.579085 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:29:32.579096 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:29:32.579107 | orchestrator | 2025-09-16 00:29:32.579118 | orchestrator | TASK 
[osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-09-16 00:29:32.579129 | orchestrator | Tuesday 16 September 2025 00:29:31 +0000 (0:00:00.579) 0:03:40.146 ***** 2025-09-16 00:29:32.579140 | orchestrator | ok: [testbed-manager] 2025-09-16 00:29:32.579150 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:29:32.579161 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:29:32.579172 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:29:32.579183 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:29:32.579194 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:29:32.579205 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:29:32.579215 | orchestrator | 2025-09-16 00:29:32.579226 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-09-16 00:29:32.579237 | orchestrator | Tuesday 16 September 2025 00:29:31 +0000 (0:00:00.573) 0:03:40.719 ***** 2025-09-16 00:29:32.579282 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757981042.7544627, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 00:29:32.579299 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757981088.438171, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 00:29:32.579312 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757981064.3135502, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 00:29:32.579328 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757981067.130099, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 00:29:32.579340 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757981078.7501123, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 00:29:32.579358 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757981064.5653522, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 00:29:32.579370 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757981074.4775214, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 00:29:32.579390 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 00:29:50.127161 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 00:29:50.127279 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 00:29:50.127296 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 00:29:50.127335 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 00:29:50.127347 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 00:29:50.127358 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 00:29:50.127370 | orchestrator | 2025-09-16 00:29:50.127384 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-09-16 00:29:50.127397 | orchestrator | Tuesday 16 September 2025 00:29:32 +0000 (0:00:00.936) 0:03:41.656 ***** 2025-09-16 00:29:50.127408 | orchestrator | changed: [testbed-manager] 2025-09-16 00:29:50.127437 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:29:50.127448 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:29:50.127459 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:29:50.127470 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:29:50.127481 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:29:50.127491 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:29:50.127502 | orchestrator | 2025-09-16 00:29:50.127513 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-09-16 00:29:50.127524 | orchestrator | Tuesday 16 September 2025 00:29:33 +0000 (0:00:01.148) 0:03:42.805 ***** 2025-09-16 00:29:50.127535 | orchestrator | changed: [testbed-manager] 2025-09-16 00:29:50.127546 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:29:50.127557 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:29:50.127567 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:29:50.127594 | orchestrator | changed: [testbed-node-0] 2025-09-16 
00:29:50.127606 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:29:50.127616 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:29:50.127627 | orchestrator | 2025-09-16 00:29:50.127638 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-09-16 00:29:50.127649 | orchestrator | Tuesday 16 September 2025 00:29:34 +0000 (0:00:01.182) 0:03:43.987 ***** 2025-09-16 00:29:50.127660 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:29:50.127671 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:29:50.127681 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:29:50.127694 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:29:50.127706 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:29:50.127718 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:29:50.127730 | orchestrator | changed: [testbed-manager] 2025-09-16 00:29:50.127767 | orchestrator | 2025-09-16 00:29:50.127780 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-09-16 00:29:50.127793 | orchestrator | Tuesday 16 September 2025 00:29:36 +0000 (0:00:01.846) 0:03:45.833 ***** 2025-09-16 00:29:50.127814 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:29:50.127827 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:29:50.127839 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:29:50.127851 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:29:50.127863 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:29:50.127875 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:29:50.127888 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:29:50.127900 | orchestrator | 2025-09-16 00:29:50.127912 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-09-16 00:29:50.127925 | orchestrator | Tuesday 16 September 2025 00:29:36 +0000 (0:00:00.259) 0:03:46.092 ***** 2025-09-16 00:29:50.127938 | orchestrator | ok: [testbed-manager] 2025-09-16 00:29:50.127951 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:29:50.127962 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:29:50.127974 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:29:50.127986 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:29:50.127998 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:29:50.128010 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:29:50.128023 | orchestrator | 2025-09-16 00:29:50.128034 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-09-16 00:29:50.128045 | orchestrator | Tuesday 16 September 2025 00:29:37 +0000 (0:00:00.716) 0:03:46.809 ***** 2025-09-16 00:29:50.128063 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:29:50.128077 | orchestrator | 2025-09-16 00:29:50.128088 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-09-16 00:29:50.128099 | orchestrator | Tuesday 16 September 2025 00:29:38 +0000 (0:00:00.378) 0:03:47.188 ***** 2025-09-16 00:29:50.128109 | orchestrator | ok: [testbed-manager] 2025-09-16 00:29:50.128120 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:29:50.128131 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:29:50.128141 | orchestrator | changed: 
[testbed-node-3] 2025-09-16 00:29:50.128152 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:29:50.128163 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:29:50.128173 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:29:50.128184 | orchestrator | 2025-09-16 00:29:50.128195 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-09-16 00:29:50.128206 | orchestrator | Tuesday 16 September 2025 00:29:46 +0000 (0:00:08.672) 0:03:55.861 ***** 2025-09-16 00:29:50.128216 | orchestrator | ok: [testbed-manager] 2025-09-16 00:29:50.128227 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:29:50.128237 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:29:50.128248 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:29:50.128259 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:29:50.128269 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:29:50.128280 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:29:50.128291 | orchestrator | 2025-09-16 00:29:50.128302 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-09-16 00:29:50.128313 | orchestrator | Tuesday 16 September 2025 00:29:48 +0000 (0:00:01.396) 0:03:57.257 ***** 2025-09-16 00:29:50.128323 | orchestrator | ok: [testbed-manager] 2025-09-16 00:29:50.128334 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:29:50.128345 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:29:50.128355 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:29:50.128366 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:29:50.128377 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:29:50.128387 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:29:50.128398 | orchestrator | 2025-09-16 00:29:50.128409 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-09-16 00:29:50.128420 | orchestrator | Tuesday 16 September 2025 00:29:49 +0000 (0:00:01.021) 0:03:58.279 ***** 2025-09-16 00:29:50.128431 | orchestrator | ok: [testbed-manager] 2025-09-16 00:29:50.128448 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:29:50.128459 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:29:50.128470 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:29:50.128480 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:29:50.128491 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:29:50.128501 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:29:50.128512 | orchestrator | 2025-09-16 00:29:50.128523 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-09-16 00:29:50.128535 | orchestrator | Tuesday 16 September 2025 00:29:49 +0000 (0:00:00.291) 0:03:58.571 ***** 2025-09-16 00:29:50.128546 | orchestrator | ok: [testbed-manager] 2025-09-16 00:29:50.128556 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:29:50.128567 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:29:50.128578 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:29:50.128588 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:29:50.128599 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:29:50.128610 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:29:50.128620 | orchestrator | 2025-09-16 00:29:50.128631 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-09-16 00:29:50.128642 | orchestrator | Tuesday 16 September 2025 00:29:49 +0000 (0:00:00.380) 0:03:58.952 ***** 2025-09-16 00:29:50.128653 | orchestrator | 
ok: [testbed-manager] 2025-09-16 00:29:50.128663 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:29:50.128674 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:29:50.128684 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:29:50.128695 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:29:50.128713 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:31:00.250308 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:31:00.250425 | orchestrator | 2025-09-16 00:31:00.250442 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-09-16 00:31:00.250456 | orchestrator | Tuesday 16 September 2025 00:29:50 +0000 (0:00:00.262) 0:03:59.214 ***** 2025-09-16 00:31:00.250467 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:31:00.250479 | orchestrator | ok: [testbed-manager] 2025-09-16 00:31:00.250490 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:31:00.250501 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:31:00.250511 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:31:00.250522 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:31:00.250533 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:31:00.250543 | orchestrator | 2025-09-16 00:31:00.250554 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-09-16 00:31:00.250566 | orchestrator | Tuesday 16 September 2025 00:29:55 +0000 (0:00:05.267) 0:04:04.482 ***** 2025-09-16 00:31:00.250579 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:31:00.250593 | orchestrator | 2025-09-16 00:31:00.250604 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-09-16 00:31:00.250615 | orchestrator | Tuesday 16 September 2025 00:29:55 +0000 (0:00:00.385) 0:04:04.867 ***** 2025-09-16 00:31:00.250626 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-09-16 00:31:00.250637 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-09-16 00:31:00.250649 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-09-16 00:31:00.250660 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-09-16 00:31:00.250671 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:31:00.250682 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-09-16 00:31:00.250693 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-09-16 00:31:00.250704 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:31:00.250714 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-09-16 00:31:00.250725 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-09-16 00:31:00.250782 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:31:00.250823 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-09-16 00:31:00.250849 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:31:00.250863 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-09-16 00:31:00.250876 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-09-16 00:31:00.250888 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:31:00.250901 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-09-16 00:31:00.250914 | 
orchestrator | skipping: [testbed-node-1] 2025-09-16 00:31:00.250926 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-09-16 00:31:00.250938 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-09-16 00:31:00.250951 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:31:00.250983 | orchestrator | 2025-09-16 00:31:00.250997 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-09-16 00:31:00.251009 | orchestrator | Tuesday 16 September 2025 00:29:56 +0000 (0:00:00.365) 0:04:05.232 ***** 2025-09-16 00:31:00.251023 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:31:00.251036 | orchestrator | 2025-09-16 00:31:00.251048 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-09-16 00:31:00.251061 | orchestrator | Tuesday 16 September 2025 00:29:56 +0000 (0:00:00.377) 0:04:05.609 ***** 2025-09-16 00:31:00.251074 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-09-16 00:31:00.251087 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:31:00.251100 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-09-16 00:31:00.251113 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:31:00.251126 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-09-16 00:31:00.251137 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-09-16 00:31:00.251148 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:31:00.251159 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-09-16 00:31:00.251169 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:31:00.251180 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:31:00.251191 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-09-16 00:31:00.251202 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:31:00.251213 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-09-16 00:31:00.251223 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:31:00.251234 | orchestrator | 2025-09-16 00:31:00.251245 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-09-16 00:31:00.251256 | orchestrator | Tuesday 16 September 2025 00:29:56 +0000 (0:00:00.299) 0:04:05.909 ***** 2025-09-16 00:31:00.251267 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:31:00.251278 | orchestrator | 2025-09-16 00:31:00.251289 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-09-16 00:31:00.251300 | orchestrator | Tuesday 16 September 2025 00:29:57 +0000 (0:00:00.377) 0:04:06.286 ***** 2025-09-16 00:31:00.251310 | orchestrator | changed: [testbed-manager] 2025-09-16 00:31:00.251339 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:31:00.251351 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:31:00.251362 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:31:00.251373 | 
orchestrator | changed: [testbed-node-2] 2025-09-16 00:31:00.251384 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:31:00.251395 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:31:00.251405 | orchestrator | 2025-09-16 00:31:00.251416 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-09-16 00:31:00.251436 | orchestrator | Tuesday 16 September 2025 00:30:31 +0000 (0:00:34.593) 0:04:40.880 ***** 2025-09-16 00:31:00.251447 | orchestrator | changed: [testbed-manager] 2025-09-16 00:31:00.251458 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:31:00.251468 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:31:00.251479 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:31:00.251490 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:31:00.251500 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:31:00.251511 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:31:00.251522 | orchestrator | 2025-09-16 00:31:00.251533 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-09-16 00:31:00.251544 | orchestrator | Tuesday 16 September 2025 00:30:39 +0000 (0:00:08.118) 0:04:48.999 ***** 2025-09-16 00:31:00.251555 | orchestrator | changed: [testbed-manager] 2025-09-16 00:31:00.251565 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:31:00.251576 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:31:00.251587 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:31:00.251598 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:31:00.251608 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:31:00.251619 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:31:00.251630 | orchestrator | 2025-09-16 00:31:00.251641 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-09-16 00:31:00.251651 | orchestrator | Tuesday 16 September 2025 00:30:47 +0000 (0:00:07.720) 0:04:56.719 ***** 2025-09-16 00:31:00.251662 | orchestrator | ok: [testbed-manager] 2025-09-16 00:31:00.251673 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:31:00.251684 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:31:00.251695 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:31:00.251705 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:31:00.251716 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:31:00.251727 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:31:00.251757 | orchestrator | 2025-09-16 00:31:00.251768 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-09-16 00:31:00.251780 | orchestrator | Tuesday 16 September 2025 00:30:49 +0000 (0:00:01.887) 0:04:58.606 ***** 2025-09-16 00:31:00.251791 | orchestrator | changed: [testbed-manager] 2025-09-16 00:31:00.251802 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:31:00.251818 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:31:00.251830 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:31:00.251840 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:31:00.251851 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:31:00.251862 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:31:00.251872 | orchestrator | 2025-09-16 00:31:00.251883 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-09-16 00:31:00.251894 | orchestrator | Tuesday 16 September 2025 00:30:56 +0000 (0:00:06.501) 0:05:05.107 ***** 2025-09-16 
00:31:00.251906 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:31:00.251918 | orchestrator | 2025-09-16 00:31:00.251929 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-09-16 00:31:00.251940 | orchestrator | Tuesday 16 September 2025 00:30:56 +0000 (0:00:00.512) 0:05:05.620 ***** 2025-09-16 00:31:00.251951 | orchestrator | changed: [testbed-manager] 2025-09-16 00:31:00.251961 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:31:00.251972 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:31:00.251982 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:31:00.251993 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:31:00.252004 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:31:00.252015 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:31:00.252025 | orchestrator | 2025-09-16 00:31:00.252036 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-09-16 00:31:00.252054 | orchestrator | Tuesday 16 September 2025 00:30:57 +0000 (0:00:00.750) 0:05:06.370 ***** 2025-09-16 00:31:00.252066 | orchestrator | ok: [testbed-manager] 2025-09-16 00:31:00.252076 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:31:00.252087 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:31:00.252098 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:31:00.252109 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:31:00.252120 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:31:00.252130 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:31:00.252141 | orchestrator | 2025-09-16 00:31:00.252152 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-09-16 00:31:00.252163 | orchestrator | Tuesday 16 September 2025 00:30:59 +0000 (0:00:01.912) 0:05:08.283 ***** 2025-09-16 00:31:00.252174 | orchestrator | changed: [testbed-manager] 2025-09-16 00:31:00.252185 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:31:00.252195 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:31:00.252206 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:31:00.252217 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:31:00.252228 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:31:00.252238 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:31:00.252249 | orchestrator | 2025-09-16 00:31:00.252260 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-09-16 00:31:00.252271 | orchestrator | Tuesday 16 September 2025 00:30:59 +0000 (0:00:00.770) 0:05:09.053 ***** 2025-09-16 00:31:00.252282 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:31:00.252292 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:31:00.252303 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:31:00.252314 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:31:00.252324 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:31:00.252335 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:31:00.252346 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:31:00.252357 | orchestrator | 2025-09-16 00:31:00.252368 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-09-16 00:31:00.252386 | orchestrator | 
Tuesday 16 September 2025 00:31:00 +0000 (0:00:00.273) 0:05:09.327 ***** 2025-09-16 00:31:26.788908 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:31:26.789019 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:31:26.789035 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:31:26.789046 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:31:26.789058 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:31:26.789069 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:31:26.789080 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:31:26.789091 | orchestrator | 2025-09-16 00:31:26.789104 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-09-16 00:31:26.789117 | orchestrator | Tuesday 16 September 2025 00:31:00 +0000 (0:00:00.391) 0:05:09.718 ***** 2025-09-16 00:31:26.789128 | orchestrator | ok: [testbed-manager] 2025-09-16 00:31:26.789139 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:31:26.789150 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:31:26.789161 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:31:26.789171 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:31:26.789182 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:31:26.789193 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:31:26.789204 | orchestrator | 2025-09-16 00:31:26.789215 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-09-16 00:31:26.789227 | orchestrator | Tuesday 16 September 2025 00:31:00 +0000 (0:00:00.259) 0:05:09.978 ***** 2025-09-16 00:31:26.789238 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:31:26.789249 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:31:26.789260 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:31:26.789271 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:31:26.789282 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:31:26.789293 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:31:26.789306 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:31:26.789344 | orchestrator | 2025-09-16 00:31:26.789357 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-09-16 00:31:26.789370 | orchestrator | Tuesday 16 September 2025 00:31:01 +0000 (0:00:00.288) 0:05:10.266 ***** 2025-09-16 00:31:26.789383 | orchestrator | ok: [testbed-manager] 2025-09-16 00:31:26.789395 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:31:26.789407 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:31:26.789419 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:31:26.789429 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:31:26.789440 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:31:26.789451 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:31:26.789461 | orchestrator | 2025-09-16 00:31:26.789473 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-09-16 00:31:26.789484 | orchestrator | Tuesday 16 September 2025 00:31:01 +0000 (0:00:00.301) 0:05:10.568 ***** 2025-09-16 00:31:26.789495 | orchestrator | ok: [testbed-manager] =>  2025-09-16 00:31:26.789506 | orchestrator |  docker_version: 5:27.5.1 2025-09-16 00:31:26.789518 | orchestrator | ok: [testbed-node-3] =>  2025-09-16 00:31:26.789531 | orchestrator |  docker_version: 5:27.5.1 2025-09-16 00:31:26.789543 | orchestrator | ok: [testbed-node-4] =>  2025-09-16 00:31:26.789552 | orchestrator |  docker_version: 5:27.5.1 
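The docker role pins both the engine and the CLI to the version printed here (5:27.5.1) before installing them; see the "Pin docker package version" tasks that follow. The pin itself is rendered by the role and is not visible in this log, so the following is only a minimal sketch of an equivalent task, assuming an apt preferences entry and the upstream docker-ce package name (both are assumptions, not taken from this job):

- name: Pin docker-ce to the version printed above (illustrative sketch)
  ansible.builtin.copy:
    dest: /etc/apt/preferences.d/docker-ce   # hypothetical path, not from this log
    mode: "0644"
    content: |
      Package: docker-ce
      Pin: version 5:27.5.1*
      Pin-Priority: 1000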
2025-09-16 00:31:26.789563 | orchestrator | ok: [testbed-node-5] =>  2025-09-16 00:31:26.789574 | orchestrator |  docker_version: 5:27.5.1 2025-09-16 00:31:26.789585 | orchestrator | ok: [testbed-node-0] =>  2025-09-16 00:31:26.789596 | orchestrator |  docker_version: 5:27.5.1 2025-09-16 00:31:26.789606 | orchestrator | ok: [testbed-node-1] =>  2025-09-16 00:31:26.789617 | orchestrator |  docker_version: 5:27.5.1 2025-09-16 00:31:26.789629 | orchestrator | ok: [testbed-node-2] =>  2025-09-16 00:31:26.789639 | orchestrator |  docker_version: 5:27.5.1 2025-09-16 00:31:26.789649 | orchestrator | 2025-09-16 00:31:26.789661 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-09-16 00:31:26.789671 | orchestrator | Tuesday 16 September 2025 00:31:01 +0000 (0:00:00.280) 0:05:10.848 ***** 2025-09-16 00:31:26.789681 | orchestrator | ok: [testbed-manager] =>  2025-09-16 00:31:26.789691 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-16 00:31:26.789701 | orchestrator | ok: [testbed-node-3] =>  2025-09-16 00:31:26.789712 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-16 00:31:26.789722 | orchestrator | ok: [testbed-node-4] =>  2025-09-16 00:31:26.789792 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-16 00:31:26.789806 | orchestrator | ok: [testbed-node-5] =>  2025-09-16 00:31:26.789817 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-16 00:31:26.789827 | orchestrator | ok: [testbed-node-0] =>  2025-09-16 00:31:26.789837 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-16 00:31:26.789846 | orchestrator | ok: [testbed-node-1] =>  2025-09-16 00:31:26.789855 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-16 00:31:26.789865 | orchestrator | ok: [testbed-node-2] =>  2025-09-16 00:31:26.789876 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-16 00:31:26.789888 | orchestrator | 2025-09-16 00:31:26.789899 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-09-16 00:31:26.789911 | orchestrator | Tuesday 16 September 2025 00:31:02 +0000 (0:00:00.275) 0:05:11.123 ***** 2025-09-16 00:31:26.789921 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:31:26.789932 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:31:26.789942 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:31:26.789949 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:31:26.789956 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:31:26.789962 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:31:26.789969 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:31:26.789976 | orchestrator | 2025-09-16 00:31:26.789982 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-09-16 00:31:26.789989 | orchestrator | Tuesday 16 September 2025 00:31:02 +0000 (0:00:00.258) 0:05:11.382 ***** 2025-09-16 00:31:26.789996 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:31:26.790052 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:31:26.790060 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:31:26.790067 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:31:26.790073 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:31:26.790080 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:31:26.790087 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:31:26.790093 | orchestrator | 2025-09-16 00:31:26.790100 | orchestrator | TASK [osism.services.docker : Include docker install 
tasks] ******************** 2025-09-16 00:31:26.790118 | orchestrator | Tuesday 16 September 2025 00:31:02 +0000 (0:00:00.298) 0:05:11.680 ***** 2025-09-16 00:31:26.790146 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:31:26.790156 | orchestrator | 2025-09-16 00:31:26.790163 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-09-16 00:31:26.790170 | orchestrator | Tuesday 16 September 2025 00:31:02 +0000 (0:00:00.410) 0:05:12.091 ***** 2025-09-16 00:31:26.790176 | orchestrator | ok: [testbed-manager] 2025-09-16 00:31:26.790183 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:31:26.790190 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:31:26.790196 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:31:26.790203 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:31:26.790209 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:31:26.790216 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:31:26.790223 | orchestrator | 2025-09-16 00:31:26.790229 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-09-16 00:31:26.790236 | orchestrator | Tuesday 16 September 2025 00:31:03 +0000 (0:00:00.815) 0:05:12.907 ***** 2025-09-16 00:31:26.790243 | orchestrator | ok: [testbed-manager] 2025-09-16 00:31:26.790249 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:31:26.790256 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:31:26.790262 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:31:26.790268 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:31:26.790275 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:31:26.790281 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:31:26.790288 | orchestrator | 2025-09-16 00:31:26.790294 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-09-16 00:31:26.790303 | orchestrator | Tuesday 16 September 2025 00:31:06 +0000 (0:00:03.174) 0:05:16.081 ***** 2025-09-16 00:31:26.790310 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-09-16 00:31:26.790316 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-09-16 00:31:26.790323 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-09-16 00:31:26.790329 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-09-16 00:31:26.790336 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-09-16 00:31:26.790342 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-09-16 00:31:26.790349 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:31:26.790356 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-09-16 00:31:26.790362 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-09-16 00:31:26.790369 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:31:26.790375 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-09-16 00:31:26.790382 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-09-16 00:31:26.790392 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-09-16 00:31:26.790398 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-09-16 00:31:26.790405 | orchestrator | skipping: 
[testbed-node-4] 2025-09-16 00:31:26.790411 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-09-16 00:31:26.790418 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-09-16 00:31:26.790425 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-09-16 00:31:26.790437 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:31:26.790444 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-09-16 00:31:26.790450 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-09-16 00:31:26.790457 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-09-16 00:31:26.790463 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:31:26.790470 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:31:26.790477 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-09-16 00:31:26.790483 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-09-16 00:31:26.790490 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-09-16 00:31:26.790496 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:31:26.790503 | orchestrator | 2025-09-16 00:31:26.790509 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-09-16 00:31:26.790516 | orchestrator | Tuesday 16 September 2025 00:31:07 +0000 (0:00:00.576) 0:05:16.658 ***** 2025-09-16 00:31:26.790523 | orchestrator | ok: [testbed-manager] 2025-09-16 00:31:26.790529 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:31:26.790536 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:31:26.790542 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:31:26.790549 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:31:26.790555 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:31:26.790562 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:31:26.790568 | orchestrator | 2025-09-16 00:31:26.790575 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-09-16 00:31:26.790582 | orchestrator | Tuesday 16 September 2025 00:31:13 +0000 (0:00:06.371) 0:05:23.030 ***** 2025-09-16 00:31:26.790588 | orchestrator | ok: [testbed-manager] 2025-09-16 00:31:26.790595 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:31:26.790601 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:31:26.790608 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:31:26.790614 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:31:26.790621 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:31:26.790627 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:31:26.790633 | orchestrator | 2025-09-16 00:31:26.790640 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-09-16 00:31:26.790647 | orchestrator | Tuesday 16 September 2025 00:31:15 +0000 (0:00:01.194) 0:05:24.224 ***** 2025-09-16 00:31:26.790653 | orchestrator | ok: [testbed-manager] 2025-09-16 00:31:26.790660 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:31:26.790666 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:31:26.790673 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:31:26.790679 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:31:26.790686 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:31:26.790692 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:31:26.790699 | orchestrator | 2025-09-16 00:31:26.790705 | orchestrator | TASK 
[osism.services.docker : Update package cache] **************************** 2025-09-16 00:31:26.790712 | orchestrator | Tuesday 16 September 2025 00:31:23 +0000 (0:00:08.405) 0:05:32.629 ***** 2025-09-16 00:31:26.790718 | orchestrator | changed: [testbed-manager] 2025-09-16 00:31:26.790725 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:31:26.790749 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:31:26.790762 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:32:08.602191 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:32:08.602317 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:32:08.602333 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:32:08.602345 | orchestrator | 2025-09-16 00:32:08.602358 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-09-16 00:32:08.602372 | orchestrator | Tuesday 16 September 2025 00:31:26 +0000 (0:00:03.240) 0:05:35.869 ***** 2025-09-16 00:32:08.602383 | orchestrator | ok: [testbed-manager] 2025-09-16 00:32:08.602395 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:32:08.602405 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:32:08.602441 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:32:08.602452 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:32:08.602463 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:32:08.602473 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:32:08.602484 | orchestrator | 2025-09-16 00:32:08.602495 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-09-16 00:32:08.602506 | orchestrator | Tuesday 16 September 2025 00:31:28 +0000 (0:00:01.374) 0:05:37.244 ***** 2025-09-16 00:32:08.602517 | orchestrator | ok: [testbed-manager] 2025-09-16 00:32:08.602528 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:32:08.602538 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:32:08.602549 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:32:08.602560 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:32:08.602571 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:32:08.602581 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:32:08.602592 | orchestrator | 2025-09-16 00:32:08.602603 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-09-16 00:32:08.602613 | orchestrator | Tuesday 16 September 2025 00:31:29 +0000 (0:00:01.314) 0:05:38.558 ***** 2025-09-16 00:32:08.602624 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:32:08.602635 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:32:08.602645 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:32:08.602656 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:32:08.602666 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:32:08.602677 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:32:08.602688 | orchestrator | changed: [testbed-manager] 2025-09-16 00:32:08.602698 | orchestrator | 2025-09-16 00:32:08.602709 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-09-16 00:32:08.602720 | orchestrator | Tuesday 16 September 2025 00:31:30 +0000 (0:00:00.746) 0:05:39.305 ***** 2025-09-16 00:32:08.602761 | orchestrator | ok: [testbed-manager] 2025-09-16 00:32:08.602773 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:32:08.602784 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:32:08.602810 | orchestrator | 
changed: [testbed-node-5] 2025-09-16 00:32:08.602822 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:32:08.602832 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:32:08.602843 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:32:08.602854 | orchestrator | 2025-09-16 00:32:08.602865 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-09-16 00:32:08.602876 | orchestrator | Tuesday 16 September 2025 00:31:39 +0000 (0:00:09.608) 0:05:48.913 ***** 2025-09-16 00:32:08.602886 | orchestrator | changed: [testbed-manager] 2025-09-16 00:32:08.602897 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:32:08.602908 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:32:08.602918 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:32:08.602929 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:32:08.602939 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:32:08.602950 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:32:08.602961 | orchestrator | 2025-09-16 00:32:08.602972 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-09-16 00:32:08.602982 | orchestrator | Tuesday 16 September 2025 00:31:40 +0000 (0:00:00.894) 0:05:49.808 ***** 2025-09-16 00:32:08.602993 | orchestrator | ok: [testbed-manager] 2025-09-16 00:32:08.603004 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:32:08.603014 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:32:08.603025 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:32:08.603036 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:32:08.603047 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:32:08.603057 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:32:08.603068 | orchestrator | 2025-09-16 00:32:08.603079 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-09-16 00:32:08.603089 | orchestrator | Tuesday 16 September 2025 00:31:48 +0000 (0:00:07.610) 0:05:57.418 ***** 2025-09-16 00:32:08.603108 | orchestrator | ok: [testbed-manager] 2025-09-16 00:32:08.603119 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:32:08.603130 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:32:08.603141 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:32:08.603151 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:32:08.603162 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:32:08.603173 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:32:08.603183 | orchestrator | 2025-09-16 00:32:08.603194 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-09-16 00:32:08.603205 | orchestrator | Tuesday 16 September 2025 00:31:58 +0000 (0:00:10.447) 0:06:07.866 ***** 2025-09-16 00:32:08.603216 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-09-16 00:32:08.603228 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-09-16 00:32:08.603239 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-09-16 00:32:08.603249 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-09-16 00:32:08.603260 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-09-16 00:32:08.603271 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-09-16 00:32:08.603281 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-09-16 00:32:08.603292 | orchestrator | ok: [testbed-node-3] => 
(item=python-docker) 2025-09-16 00:32:08.603303 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-09-16 00:32:08.603314 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-09-16 00:32:08.603324 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-09-16 00:32:08.603335 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-09-16 00:32:08.603345 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-09-16 00:32:08.603357 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-09-16 00:32:08.603367 | orchestrator | 2025-09-16 00:32:08.603378 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-09-16 00:32:08.603410 | orchestrator | Tuesday 16 September 2025 00:31:59 +0000 (0:00:01.160) 0:06:09.026 ***** 2025-09-16 00:32:08.603421 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:32:08.603432 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:32:08.603443 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:32:08.603453 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:32:08.603464 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:32:08.603475 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:32:08.603486 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:32:08.603496 | orchestrator | 2025-09-16 00:32:08.603507 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-09-16 00:32:08.603518 | orchestrator | Tuesday 16 September 2025 00:32:00 +0000 (0:00:00.512) 0:06:09.539 ***** 2025-09-16 00:32:08.603529 | orchestrator | ok: [testbed-manager] 2025-09-16 00:32:08.603540 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:32:08.603550 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:32:08.603561 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:32:08.603572 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:32:08.603582 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:32:08.603593 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:32:08.603604 | orchestrator | 2025-09-16 00:32:08.603615 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-09-16 00:32:08.603627 | orchestrator | Tuesday 16 September 2025 00:32:04 +0000 (0:00:03.784) 0:06:13.323 ***** 2025-09-16 00:32:08.603638 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:32:08.603649 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:32:08.603659 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:32:08.603670 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:32:08.603681 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:32:08.603692 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:32:08.603702 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:32:08.603720 | orchestrator | 2025-09-16 00:32:08.603751 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-09-16 00:32:08.603763 | orchestrator | Tuesday 16 September 2025 00:32:04 +0000 (0:00:00.485) 0:06:13.809 ***** 2025-09-16 00:32:08.603774 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-09-16 00:32:08.603785 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-09-16 00:32:08.603796 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:32:08.603807 | orchestrator | 
skipping: [testbed-node-3] => (item=python3-docker)  2025-09-16 00:32:08.603823 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-09-16 00:32:08.603835 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:32:08.603846 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-09-16 00:32:08.603856 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-09-16 00:32:08.603867 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:32:08.603878 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-09-16 00:32:08.603889 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-09-16 00:32:08.603900 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:32:08.603911 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-09-16 00:32:08.603922 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-09-16 00:32:08.603933 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:32:08.603943 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-09-16 00:32:08.603954 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-09-16 00:32:08.603965 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:32:08.603976 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-09-16 00:32:08.603986 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-09-16 00:32:08.603997 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:32:08.604008 | orchestrator | 2025-09-16 00:32:08.604019 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-09-16 00:32:08.604030 | orchestrator | Tuesday 16 September 2025 00:32:05 +0000 (0:00:00.687) 0:06:14.496 ***** 2025-09-16 00:32:08.604041 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:32:08.604052 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:32:08.604063 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:32:08.604073 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:32:08.604084 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:32:08.604095 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:32:08.604106 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:32:08.604116 | orchestrator | 2025-09-16 00:32:08.604127 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-09-16 00:32:08.604138 | orchestrator | Tuesday 16 September 2025 00:32:05 +0000 (0:00:00.528) 0:06:15.024 ***** 2025-09-16 00:32:08.604149 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:32:08.604160 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:32:08.604170 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:32:08.604181 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:32:08.604192 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:32:08.604202 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:32:08.604213 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:32:08.604224 | orchestrator | 2025-09-16 00:32:08.604235 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-09-16 00:32:08.604246 | orchestrator | Tuesday 16 September 2025 00:32:06 +0000 (0:00:00.500) 0:06:15.525 ***** 2025-09-16 00:32:08.604257 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:32:08.604267 | orchestrator | skipping: 
[testbed-node-3] 2025-09-16 00:32:08.604278 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:32:08.604289 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:32:08.604299 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:32:08.604317 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:32:08.604328 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:32:08.604339 | orchestrator | 2025-09-16 00:32:08.604350 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-09-16 00:32:08.604361 | orchestrator | Tuesday 16 September 2025 00:32:06 +0000 (0:00:00.516) 0:06:16.041 ***** 2025-09-16 00:32:08.604372 | orchestrator | ok: [testbed-manager] 2025-09-16 00:32:08.604389 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:32:30.930281 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:32:30.930401 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:32:30.930418 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:32:30.930429 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:32:30.930440 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:32:30.930451 | orchestrator | 2025-09-16 00:32:30.930463 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-09-16 00:32:30.930476 | orchestrator | Tuesday 16 September 2025 00:32:08 +0000 (0:00:01.647) 0:06:17.688 ***** 2025-09-16 00:32:30.930488 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:32:30.930501 | orchestrator | 2025-09-16 00:32:30.930512 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-09-16 00:32:30.930523 | orchestrator | Tuesday 16 September 2025 00:32:09 +0000 (0:00:00.998) 0:06:18.687 ***** 2025-09-16 00:32:30.930534 | orchestrator | ok: [testbed-manager] 2025-09-16 00:32:30.930545 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:32:30.930557 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:32:30.930567 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:32:30.930578 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:32:30.930589 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:32:30.930600 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:32:30.930610 | orchestrator | 2025-09-16 00:32:30.930621 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-09-16 00:32:30.930632 | orchestrator | Tuesday 16 September 2025 00:32:10 +0000 (0:00:00.842) 0:06:19.530 ***** 2025-09-16 00:32:30.930643 | orchestrator | ok: [testbed-manager] 2025-09-16 00:32:30.930654 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:32:30.930664 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:32:30.930675 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:32:30.930687 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:32:30.930697 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:32:30.930708 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:32:30.930719 | orchestrator | 2025-09-16 00:32:30.930779 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-09-16 00:32:30.930790 | orchestrator | Tuesday 16 September 2025 00:32:11 +0000 (0:00:00.903) 0:06:20.433 ***** 2025-09-16 00:32:30.930801 | orchestrator | ok: 
[testbed-manager] 2025-09-16 00:32:30.930813 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:32:30.930842 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:32:30.930854 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:32:30.930866 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:32:30.930878 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:32:30.930890 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:32:30.930902 | orchestrator | 2025-09-16 00:32:30.930914 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-09-16 00:32:30.930928 | orchestrator | Tuesday 16 September 2025 00:32:12 +0000 (0:00:01.398) 0:06:21.832 ***** 2025-09-16 00:32:30.930941 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:32:30.930953 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:32:30.930965 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:32:30.930977 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:32:30.930989 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:32:30.931001 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:32:30.931038 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:32:30.931051 | orchestrator | 2025-09-16 00:32:30.931064 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-09-16 00:32:30.931076 | orchestrator | Tuesday 16 September 2025 00:32:14 +0000 (0:00:01.609) 0:06:23.441 ***** 2025-09-16 00:32:30.931087 | orchestrator | ok: [testbed-manager] 2025-09-16 00:32:30.931097 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:32:30.931108 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:32:30.931119 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:32:30.931129 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:32:30.931140 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:32:30.931151 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:32:30.931161 | orchestrator | 2025-09-16 00:32:30.931172 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-09-16 00:32:30.931183 | orchestrator | Tuesday 16 September 2025 00:32:15 +0000 (0:00:01.374) 0:06:24.815 ***** 2025-09-16 00:32:30.931194 | orchestrator | changed: [testbed-manager] 2025-09-16 00:32:30.931204 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:32:30.931215 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:32:30.931226 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:32:30.931236 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:32:30.931247 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:32:30.931257 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:32:30.931268 | orchestrator | 2025-09-16 00:32:30.931279 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-09-16 00:32:30.931290 | orchestrator | Tuesday 16 September 2025 00:32:17 +0000 (0:00:01.413) 0:06:26.229 ***** 2025-09-16 00:32:30.931301 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:32:30.931312 | orchestrator | 2025-09-16 00:32:30.931323 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-09-16 00:32:30.931334 | orchestrator | Tuesday 16 September 2025 00:32:18 +0000 
(0:00:00.958) 0:06:27.187 ***** 2025-09-16 00:32:30.931344 | orchestrator | ok: [testbed-manager] 2025-09-16 00:32:30.931355 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:32:30.931367 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:32:30.931377 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:32:30.931388 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:32:30.931399 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:32:30.931409 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:32:30.931420 | orchestrator | 2025-09-16 00:32:30.931431 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-09-16 00:32:30.931442 | orchestrator | Tuesday 16 September 2025 00:32:19 +0000 (0:00:01.316) 0:06:28.504 ***** 2025-09-16 00:32:30.931453 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:32:30.931463 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:32:30.931493 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:32:30.931504 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:32:30.931515 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:32:30.931526 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:32:30.931536 | orchestrator | ok: [testbed-manager] 2025-09-16 00:32:30.931547 | orchestrator | 2025-09-16 00:32:30.931558 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-09-16 00:32:30.931569 | orchestrator | Tuesday 16 September 2025 00:32:21 +0000 (0:00:01.706) 0:06:30.210 ***** 2025-09-16 00:32:30.931580 | orchestrator | ok: [testbed-manager] 2025-09-16 00:32:30.931590 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:32:30.931601 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:32:30.931611 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:32:30.931622 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:32:30.931632 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:32:30.931643 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:32:30.931654 | orchestrator | 2025-09-16 00:32:30.931665 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-09-16 00:32:30.931684 | orchestrator | Tuesday 16 September 2025 00:32:22 +0000 (0:00:01.104) 0:06:31.314 ***** 2025-09-16 00:32:30.931695 | orchestrator | ok: [testbed-manager] 2025-09-16 00:32:30.931705 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:32:30.931716 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:32:30.931746 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:32:30.931757 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:32:30.931768 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:32:30.931778 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:32:30.931789 | orchestrator | 2025-09-16 00:32:30.931800 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-09-16 00:32:30.931811 | orchestrator | Tuesday 16 September 2025 00:32:23 +0000 (0:00:01.062) 0:06:32.376 ***** 2025-09-16 00:32:30.931822 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:32:30.931833 | orchestrator | 2025-09-16 00:32:30.931844 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-16 00:32:30.931855 | orchestrator | Tuesday 16 September 2025 00:32:24 +0000 (0:00:01.024) 0:06:33.401 ***** 
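The three "Manage ... service" tasks above report ok on every host, i.e. docker.service, docker.socket and containerd.service were already enabled and running after installation. A minimal sketch of equivalent standalone service handling (the unit names are inferred from the task titles; the state/enabled values are assumptions, not taken from this job):

- name: Ensure the Docker-related units are enabled and running (illustrative sketch)
  ansible.builtin.systemd:
    name: "{{ item }}"
    state: started
    enabled: true
  loop:
    - docker.service
    - docker.socket
    - containerd.service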
2025-09-16 00:32:30.931865 | orchestrator | 2025-09-16 00:32:30.931876 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-16 00:32:30.931887 | orchestrator | Tuesday 16 September 2025 00:32:24 +0000 (0:00:00.039) 0:06:33.440 ***** 2025-09-16 00:32:30.931898 | orchestrator | 2025-09-16 00:32:30.931909 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-16 00:32:30.931920 | orchestrator | Tuesday 16 September 2025 00:32:24 +0000 (0:00:00.038) 0:06:33.478 ***** 2025-09-16 00:32:30.931931 | orchestrator | 2025-09-16 00:32:30.931941 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-16 00:32:30.931952 | orchestrator | Tuesday 16 September 2025 00:32:24 +0000 (0:00:00.045) 0:06:33.524 ***** 2025-09-16 00:32:30.931963 | orchestrator | 2025-09-16 00:32:30.931974 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-16 00:32:30.931992 | orchestrator | Tuesday 16 September 2025 00:32:24 +0000 (0:00:00.038) 0:06:33.562 ***** 2025-09-16 00:32:30.932004 | orchestrator | 2025-09-16 00:32:30.932015 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-16 00:32:30.932026 | orchestrator | Tuesday 16 September 2025 00:32:24 +0000 (0:00:00.038) 0:06:33.601 ***** 2025-09-16 00:32:30.932037 | orchestrator | 2025-09-16 00:32:30.932047 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-16 00:32:30.932058 | orchestrator | Tuesday 16 September 2025 00:32:24 +0000 (0:00:00.057) 0:06:33.658 ***** 2025-09-16 00:32:30.932069 | orchestrator | 2025-09-16 00:32:30.932079 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-16 00:32:30.932090 | orchestrator | Tuesday 16 September 2025 00:32:24 +0000 (0:00:00.038) 0:06:33.696 ***** 2025-09-16 00:32:30.932101 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:32:30.932112 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:32:30.932122 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:32:30.932133 | orchestrator | 2025-09-16 00:32:30.932144 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-09-16 00:32:30.932155 | orchestrator | Tuesday 16 September 2025 00:32:25 +0000 (0:00:01.108) 0:06:34.805 ***** 2025-09-16 00:32:30.932166 | orchestrator | changed: [testbed-manager] 2025-09-16 00:32:30.932177 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:32:30.932188 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:32:30.932199 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:32:30.932209 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:32:30.932220 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:32:30.932230 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:32:30.932241 | orchestrator | 2025-09-16 00:32:30.932252 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-09-16 00:32:30.932270 | orchestrator | Tuesday 16 September 2025 00:32:27 +0000 (0:00:01.374) 0:06:36.179 ***** 2025-09-16 00:32:30.932281 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:32:30.932291 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:32:30.932302 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:32:30.932313 | orchestrator | changed: [testbed-node-5] 2025-09-16 
00:32:30.932323 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:32:30.932334 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:32:30.932345 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:32:30.932355 | orchestrator | 2025-09-16 00:32:30.932366 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-09-16 00:32:30.932377 | orchestrator | Tuesday 16 September 2025 00:32:29 +0000 (0:00:02.707) 0:06:38.887 ***** 2025-09-16 00:32:30.932388 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:32:30.932399 | orchestrator | 2025-09-16 00:32:30.932410 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-09-16 00:32:30.932420 | orchestrator | Tuesday 16 September 2025 00:32:29 +0000 (0:00:00.104) 0:06:38.992 ***** 2025-09-16 00:32:30.932431 | orchestrator | ok: [testbed-manager] 2025-09-16 00:32:30.932442 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:32:30.932453 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:32:30.932464 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:32:30.932481 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:32:55.915546 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:32:55.915661 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:32:55.915676 | orchestrator | 2025-09-16 00:32:55.915689 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-09-16 00:32:55.915702 | orchestrator | Tuesday 16 September 2025 00:32:30 +0000 (0:00:01.020) 0:06:40.012 ***** 2025-09-16 00:32:55.915715 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:32:55.915771 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:32:55.915783 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:32:55.915794 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:32:55.915805 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:32:55.915816 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:32:55.915827 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:32:55.915838 | orchestrator | 2025-09-16 00:32:55.915850 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-09-16 00:32:55.915861 | orchestrator | Tuesday 16 September 2025 00:32:31 +0000 (0:00:00.515) 0:06:40.528 ***** 2025-09-16 00:32:55.915873 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:32:55.915887 | orchestrator | 2025-09-16 00:32:55.915898 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-09-16 00:32:55.915910 | orchestrator | Tuesday 16 September 2025 00:32:32 +0000 (0:00:01.007) 0:06:41.535 ***** 2025-09-16 00:32:55.915921 | orchestrator | ok: [testbed-manager] 2025-09-16 00:32:55.915933 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:32:55.915944 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:32:55.915955 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:32:55.915966 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:32:55.915977 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:32:55.915988 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:32:55.915999 | orchestrator | 2025-09-16 00:32:55.916010 | orchestrator | TASK [osism.services.docker : Copy docker fact files] 
************************** 2025-09-16 00:32:55.916021 | orchestrator | Tuesday 16 September 2025 00:32:33 +0000 (0:00:00.812) 0:06:42.347 ***** 2025-09-16 00:32:55.916032 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-09-16 00:32:55.916044 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-09-16 00:32:55.916055 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-09-16 00:32:55.916104 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-09-16 00:32:55.916119 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-09-16 00:32:55.916131 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-09-16 00:32:55.916143 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-09-16 00:32:55.916156 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-09-16 00:32:55.916168 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-09-16 00:32:55.916180 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-09-16 00:32:55.916193 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-09-16 00:32:55.916205 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-09-16 00:32:55.916217 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-09-16 00:32:55.916230 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-09-16 00:32:55.916242 | orchestrator | 2025-09-16 00:32:55.916255 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-09-16 00:32:55.916267 | orchestrator | Tuesday 16 September 2025 00:32:35 +0000 (0:00:02.484) 0:06:44.832 ***** 2025-09-16 00:32:55.916279 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:32:55.916292 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:32:55.916304 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:32:55.916316 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:32:55.916328 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:32:55.916340 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:32:55.916353 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:32:55.916365 | orchestrator | 2025-09-16 00:32:55.916378 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-09-16 00:32:55.916391 | orchestrator | Tuesday 16 September 2025 00:32:36 +0000 (0:00:00.485) 0:06:45.317 ***** 2025-09-16 00:32:55.916406 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:32:55.916419 | orchestrator | 2025-09-16 00:32:55.916431 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-09-16 00:32:55.916444 | orchestrator | Tuesday 16 September 2025 00:32:37 +0000 (0:00:00.985) 0:06:46.302 ***** 2025-09-16 00:32:55.916455 | orchestrator | ok: [testbed-manager] 2025-09-16 00:32:55.916466 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:32:55.916476 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:32:55.916487 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:32:55.916498 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:32:55.916509 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:32:55.916520 | 
orchestrator | ok: [testbed-node-2] 2025-09-16 00:32:55.916530 | orchestrator | 2025-09-16 00:32:55.916541 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-09-16 00:32:55.916552 | orchestrator | Tuesday 16 September 2025 00:32:38 +0000 (0:00:00.833) 0:06:47.136 ***** 2025-09-16 00:32:55.916564 | orchestrator | ok: [testbed-manager] 2025-09-16 00:32:55.916574 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:32:55.916585 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:32:55.916596 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:32:55.916607 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:32:55.916617 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:32:55.916628 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:32:55.916639 | orchestrator | 2025-09-16 00:32:55.916650 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-09-16 00:32:55.916679 | orchestrator | Tuesday 16 September 2025 00:32:38 +0000 (0:00:00.810) 0:06:47.947 ***** 2025-09-16 00:32:55.916691 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:32:55.916702 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:32:55.916712 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:32:55.916769 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:32:55.916781 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:32:55.916792 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:32:55.916803 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:32:55.916814 | orchestrator | 2025-09-16 00:32:55.916824 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-09-16 00:32:55.916835 | orchestrator | Tuesday 16 September 2025 00:32:39 +0000 (0:00:00.465) 0:06:48.412 ***** 2025-09-16 00:32:55.916846 | orchestrator | ok: [testbed-manager] 2025-09-16 00:32:55.916857 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:32:55.916868 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:32:55.916878 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:32:55.916889 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:32:55.916900 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:32:55.916911 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:32:55.916921 | orchestrator | 2025-09-16 00:32:55.916932 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-09-16 00:32:55.916943 | orchestrator | Tuesday 16 September 2025 00:32:40 +0000 (0:00:01.532) 0:06:49.945 ***** 2025-09-16 00:32:55.916954 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:32:55.916965 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:32:55.916976 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:32:55.916986 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:32:55.916997 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:32:55.917008 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:32:55.917018 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:32:55.917029 | orchestrator | 2025-09-16 00:32:55.917040 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-09-16 00:32:55.917051 | orchestrator | Tuesday 16 September 2025 00:32:41 +0000 (0:00:00.484) 0:06:50.429 ***** 2025-09-16 00:32:55.917062 | orchestrator | ok: [testbed-manager] 2025-09-16 00:32:55.917073 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:32:55.917083 | orchestrator | 
changed: [testbed-node-0] 2025-09-16 00:32:55.917094 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:32:55.917105 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:32:55.917115 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:32:55.917126 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:32:55.917137 | orchestrator | 2025-09-16 00:32:55.917148 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-09-16 00:32:55.917164 | orchestrator | Tuesday 16 September 2025 00:32:48 +0000 (0:00:07.357) 0:06:57.787 ***** 2025-09-16 00:32:55.917176 | orchestrator | ok: [testbed-manager] 2025-09-16 00:32:55.917186 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:32:55.917197 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:32:55.917208 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:32:55.917219 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:32:55.917229 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:32:55.917240 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:32:55.917251 | orchestrator | 2025-09-16 00:32:55.917262 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-09-16 00:32:55.917273 | orchestrator | Tuesday 16 September 2025 00:32:49 +0000 (0:00:01.281) 0:06:59.069 ***** 2025-09-16 00:32:55.917284 | orchestrator | ok: [testbed-manager] 2025-09-16 00:32:55.917294 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:32:55.917305 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:32:55.917316 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:32:55.917326 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:32:55.917337 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:32:55.917348 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:32:55.917358 | orchestrator | 2025-09-16 00:32:55.917369 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-09-16 00:32:55.917380 | orchestrator | Tuesday 16 September 2025 00:32:51 +0000 (0:00:01.734) 0:07:00.803 ***** 2025-09-16 00:32:55.917391 | orchestrator | ok: [testbed-manager] 2025-09-16 00:32:55.917410 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:32:55.917421 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:32:55.917432 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:32:55.917443 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:32:55.917453 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:32:55.917464 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:32:55.917475 | orchestrator | 2025-09-16 00:32:55.917486 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-16 00:32:55.917497 | orchestrator | Tuesday 16 September 2025 00:32:53 +0000 (0:00:01.919) 0:07:02.722 ***** 2025-09-16 00:32:55.917508 | orchestrator | ok: [testbed-manager] 2025-09-16 00:32:55.917519 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:32:55.917530 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:32:55.917540 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:32:55.917551 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:32:55.917562 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:32:55.917573 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:32:55.917584 | orchestrator | 2025-09-16 00:32:55.917595 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-16 
00:32:55.917606 | orchestrator | Tuesday 16 September 2025 00:32:54 +0000 (0:00:00.844) 0:07:03.566 ***** 2025-09-16 00:32:55.917617 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:32:55.917627 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:32:55.917638 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:32:55.917649 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:32:55.917660 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:32:55.917670 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:32:55.917681 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:32:55.917692 | orchestrator | 2025-09-16 00:32:55.917703 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-09-16 00:32:55.917714 | orchestrator | Tuesday 16 September 2025 00:32:55 +0000 (0:00:00.930) 0:07:04.497 ***** 2025-09-16 00:32:55.917752 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:32:55.917763 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:32:55.917774 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:32:55.917785 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:32:55.917796 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:32:55.917806 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:32:55.917817 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:32:55.917828 | orchestrator | 2025-09-16 00:32:55.917845 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-09-16 00:33:27.552202 | orchestrator | Tuesday 16 September 2025 00:32:55 +0000 (0:00:00.500) 0:07:04.998 ***** 2025-09-16 00:33:27.552320 | orchestrator | ok: [testbed-manager] 2025-09-16 00:33:27.552338 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:33:27.552350 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:33:27.552361 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:33:27.552372 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:33:27.552383 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:33:27.552395 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:33:27.552406 | orchestrator | 2025-09-16 00:33:27.552417 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-09-16 00:33:27.552429 | orchestrator | Tuesday 16 September 2025 00:32:56 +0000 (0:00:00.495) 0:07:05.494 ***** 2025-09-16 00:33:27.552440 | orchestrator | ok: [testbed-manager] 2025-09-16 00:33:27.552450 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:33:27.552461 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:33:27.552471 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:33:27.552482 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:33:27.552493 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:33:27.552503 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:33:27.552514 | orchestrator | 2025-09-16 00:33:27.552525 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-09-16 00:33:27.552536 | orchestrator | Tuesday 16 September 2025 00:32:56 +0000 (0:00:00.503) 0:07:05.997 ***** 2025-09-16 00:33:27.552575 | orchestrator | ok: [testbed-manager] 2025-09-16 00:33:27.552586 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:33:27.552597 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:33:27.552607 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:33:27.552617 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:33:27.552628 | orchestrator | ok: [testbed-node-1] 
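For orientation: the chrony tasks in this block only resolve the configuration and key file paths; the actual chrony.conf is rendered from the role's chrony.conf.j2 template a few tasks further down. A minimal sketch of the kind of file such a rendering produces, using standard chrony directives; the placeholder servers and paths below are illustrative assumptions and not values taken from this run:

    # /etc/chrony/chrony.conf (illustrative sketch only)
    server 0.pool.ntp.org iburst        # placeholder servers; the real list comes from the role's configured variables
    server 1.pool.ntp.org iburst
    keyfile /etc/chrony/chrony.keys     # assumed Debian-family default key file path
    driftfile /var/lib/chrony/chrony.drift
    makestep 1.0 3                      # step the clock on large offsets during the first updates
    rtcsync                             # keep the hardware clock in sync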
2025-09-16 00:33:27.552638 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:33:27.552649 | orchestrator | 2025-09-16 00:33:27.552659 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-09-16 00:33:27.552670 | orchestrator | Tuesday 16 September 2025 00:32:57 +0000 (0:00:00.481) 0:07:06.479 ***** 2025-09-16 00:33:27.552681 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:33:27.552691 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:33:27.552701 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:33:27.552712 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:33:27.552760 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:33:27.552773 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:33:27.552785 | orchestrator | ok: [testbed-manager] 2025-09-16 00:33:27.552797 | orchestrator | 2025-09-16 00:33:27.552809 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-09-16 00:33:27.552835 | orchestrator | Tuesday 16 September 2025 00:33:03 +0000 (0:00:05.690) 0:07:12.170 ***** 2025-09-16 00:33:27.552848 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:33:27.552862 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:33:27.552873 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:33:27.552885 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:33:27.552898 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:33:27.552910 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:33:27.552922 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:33:27.552935 | orchestrator | 2025-09-16 00:33:27.552947 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-09-16 00:33:27.552959 | orchestrator | Tuesday 16 September 2025 00:33:03 +0000 (0:00:00.506) 0:07:12.676 ***** 2025-09-16 00:33:27.552973 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:33:27.552989 | orchestrator | 2025-09-16 00:33:27.553001 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-09-16 00:33:27.553013 | orchestrator | Tuesday 16 September 2025 00:33:04 +0000 (0:00:00.780) 0:07:13.457 ***** 2025-09-16 00:33:27.553025 | orchestrator | ok: [testbed-manager] 2025-09-16 00:33:27.553038 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:33:27.553049 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:33:27.553061 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:33:27.553074 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:33:27.553087 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:33:27.553099 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:33:27.553112 | orchestrator | 2025-09-16 00:33:27.553124 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-09-16 00:33:27.553135 | orchestrator | Tuesday 16 September 2025 00:33:06 +0000 (0:00:01.979) 0:07:15.436 ***** 2025-09-16 00:33:27.553146 | orchestrator | ok: [testbed-manager] 2025-09-16 00:33:27.553157 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:33:27.553167 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:33:27.553178 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:33:27.553188 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:33:27.553198 | orchestrator 
| ok: [testbed-node-1] 2025-09-16 00:33:27.553209 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:33:27.553219 | orchestrator | 2025-09-16 00:33:27.553231 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-09-16 00:33:27.553241 | orchestrator | Tuesday 16 September 2025 00:33:07 +0000 (0:00:01.078) 0:07:16.514 ***** 2025-09-16 00:33:27.553252 | orchestrator | ok: [testbed-manager] 2025-09-16 00:33:27.553262 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:33:27.553282 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:33:27.553293 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:33:27.553303 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:33:27.553314 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:33:27.553324 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:33:27.553335 | orchestrator | 2025-09-16 00:33:27.553346 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-09-16 00:33:27.553357 | orchestrator | Tuesday 16 September 2025 00:33:08 +0000 (0:00:00.813) 0:07:17.328 ***** 2025-09-16 00:33:27.553368 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-16 00:33:27.553380 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-16 00:33:27.553391 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-16 00:33:27.553420 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-16 00:33:27.553431 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-16 00:33:27.553442 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-16 00:33:27.553453 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-16 00:33:27.553464 | orchestrator | 2025-09-16 00:33:27.553475 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-09-16 00:33:27.553486 | orchestrator | Tuesday 16 September 2025 00:33:09 +0000 (0:00:01.658) 0:07:18.986 ***** 2025-09-16 00:33:27.553497 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:33:27.553508 | orchestrator | 2025-09-16 00:33:27.553519 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-09-16 00:33:27.553529 | orchestrator | Tuesday 16 September 2025 00:33:10 +0000 (0:00:00.969) 0:07:19.956 ***** 2025-09-16 00:33:27.553540 | orchestrator | changed: [testbed-manager] 2025-09-16 00:33:27.553551 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:33:27.553562 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:33:27.553572 | orchestrator | changed: [testbed-node-3] 2025-09-16 
00:33:27.553583 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:33:27.553594 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:33:27.553604 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:33:27.553615 | orchestrator | 2025-09-16 00:33:27.553626 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-09-16 00:33:27.553636 | orchestrator | Tuesday 16 September 2025 00:33:19 +0000 (0:00:08.902) 0:07:28.859 ***** 2025-09-16 00:33:27.553647 | orchestrator | ok: [testbed-manager] 2025-09-16 00:33:27.553663 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:33:27.553674 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:33:27.553685 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:33:27.553696 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:33:27.553707 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:33:27.553717 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:33:27.553747 | orchestrator | 2025-09-16 00:33:27.553758 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-09-16 00:33:27.553769 | orchestrator | Tuesday 16 September 2025 00:33:21 +0000 (0:00:01.812) 0:07:30.671 ***** 2025-09-16 00:33:27.553779 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:33:27.553790 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:33:27.553808 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:33:27.553818 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:33:27.553829 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:33:27.553839 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:33:27.553850 | orchestrator | 2025-09-16 00:33:27.553861 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-09-16 00:33:27.553872 | orchestrator | Tuesday 16 September 2025 00:33:22 +0000 (0:00:01.304) 0:07:31.976 ***** 2025-09-16 00:33:27.553882 | orchestrator | changed: [testbed-manager] 2025-09-16 00:33:27.553893 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:33:27.553904 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:33:27.553914 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:33:27.553925 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:33:27.553936 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:33:27.553946 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:33:27.553957 | orchestrator | 2025-09-16 00:33:27.553968 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-09-16 00:33:27.553978 | orchestrator | 2025-09-16 00:33:27.553989 | orchestrator | TASK [Include hardening role] ************************************************** 2025-09-16 00:33:27.554000 | orchestrator | Tuesday 16 September 2025 00:33:24 +0000 (0:00:01.217) 0:07:33.193 ***** 2025-09-16 00:33:27.554011 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:33:27.554079 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:33:27.554091 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:33:27.554102 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:33:27.554112 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:33:27.554123 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:33:27.554134 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:33:27.554144 | orchestrator | 2025-09-16 00:33:27.554155 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-09-16 00:33:27.554166 | 
orchestrator | 2025-09-16 00:33:27.554177 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-09-16 00:33:27.554187 | orchestrator | Tuesday 16 September 2025 00:33:24 +0000 (0:00:00.453) 0:07:33.647 ***** 2025-09-16 00:33:27.554198 | orchestrator | changed: [testbed-manager] 2025-09-16 00:33:27.554209 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:33:27.554219 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:33:27.554230 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:33:27.554241 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:33:27.554251 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:33:27.554262 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:33:27.554273 | orchestrator | 2025-09-16 00:33:27.554283 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-09-16 00:33:27.554294 | orchestrator | Tuesday 16 September 2025 00:33:25 +0000 (0:00:01.303) 0:07:34.950 ***** 2025-09-16 00:33:27.554305 | orchestrator | ok: [testbed-manager] 2025-09-16 00:33:27.554316 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:33:27.554326 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:33:27.554337 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:33:27.554348 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:33:27.554358 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:33:27.554369 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:33:27.554379 | orchestrator | 2025-09-16 00:33:27.554390 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-09-16 00:33:27.554408 | orchestrator | Tuesday 16 September 2025 00:33:27 +0000 (0:00:01.680) 0:07:36.630 ***** 2025-09-16 00:33:49.482334 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:33:49.482446 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:33:49.482462 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:33:49.482476 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:33:49.482487 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:33:49.482498 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:33:49.482509 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:33:49.482520 | orchestrator | 2025-09-16 00:33:49.482559 | orchestrator | TASK [Include smartd role] ***************************************************** 2025-09-16 00:33:49.482572 | orchestrator | Tuesday 16 September 2025 00:33:28 +0000 (0:00:00.469) 0:07:37.100 ***** 2025-09-16 00:33:49.482584 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:33:49.482596 | orchestrator | 2025-09-16 00:33:49.482607 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-09-16 00:33:49.482618 | orchestrator | Tuesday 16 September 2025 00:33:28 +0000 (0:00:00.927) 0:07:38.027 ***** 2025-09-16 00:33:49.482630 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:33:49.482643 | orchestrator | 2025-09-16 00:33:49.482655 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-09-16 00:33:49.482666 | orchestrator | Tuesday 16 September 
2025 00:33:29 +0000 (0:00:00.743) 0:07:38.771 ***** 2025-09-16 00:33:49.482676 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:33:49.482687 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:33:49.482697 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:33:49.482708 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:33:49.482718 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:33:49.482790 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:33:49.482801 | orchestrator | changed: [testbed-manager] 2025-09-16 00:33:49.482812 | orchestrator | 2025-09-16 00:33:49.482823 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-09-16 00:33:49.482834 | orchestrator | Tuesday 16 September 2025 00:33:37 +0000 (0:00:07.485) 0:07:46.256 ***** 2025-09-16 00:33:49.482845 | orchestrator | changed: [testbed-manager] 2025-09-16 00:33:49.482856 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:33:49.482867 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:33:49.482879 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:33:49.482892 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:33:49.482904 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:33:49.482916 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:33:49.482927 | orchestrator | 2025-09-16 00:33:49.482940 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-09-16 00:33:49.482952 | orchestrator | Tuesday 16 September 2025 00:33:37 +0000 (0:00:00.799) 0:07:47.055 ***** 2025-09-16 00:33:49.482964 | orchestrator | changed: [testbed-manager] 2025-09-16 00:33:49.482977 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:33:49.482989 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:33:49.483001 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:33:49.483013 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:33:49.483025 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:33:49.483037 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:33:49.483048 | orchestrator | 2025-09-16 00:33:49.483060 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-09-16 00:33:49.483073 | orchestrator | Tuesday 16 September 2025 00:33:39 +0000 (0:00:01.483) 0:07:48.539 ***** 2025-09-16 00:33:49.483085 | orchestrator | changed: [testbed-manager] 2025-09-16 00:33:49.483097 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:33:49.483109 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:33:49.483121 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:33:49.483133 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:33:49.483192 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:33:49.483206 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:33:49.483220 | orchestrator | 2025-09-16 00:33:49.483231 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-09-16 00:33:49.483241 | orchestrator | Tuesday 16 September 2025 00:33:41 +0000 (0:00:01.706) 0:07:50.246 ***** 2025-09-16 00:33:49.483252 | orchestrator | changed: [testbed-manager] 2025-09-16 00:33:49.483272 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:33:49.483283 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:33:49.483293 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:33:49.483304 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:33:49.483315 | orchestrator | 
changed: [testbed-node-1] 2025-09-16 00:33:49.483325 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:33:49.483336 | orchestrator | 2025-09-16 00:33:49.483347 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-09-16 00:33:49.483357 | orchestrator | Tuesday 16 September 2025 00:33:42 +0000 (0:00:01.205) 0:07:51.451 ***** 2025-09-16 00:33:49.483368 | orchestrator | changed: [testbed-manager] 2025-09-16 00:33:49.483379 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:33:49.483389 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:33:49.483400 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:33:49.483411 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:33:49.483421 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:33:49.483432 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:33:49.483443 | orchestrator | 2025-09-16 00:33:49.483454 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-09-16 00:33:49.483465 | orchestrator | 2025-09-16 00:33:49.483476 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-09-16 00:33:49.483487 | orchestrator | Tuesday 16 September 2025 00:33:43 +0000 (0:00:01.422) 0:07:52.873 ***** 2025-09-16 00:33:49.483498 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:33:49.483509 | orchestrator | 2025-09-16 00:33:49.483520 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-16 00:33:49.483548 | orchestrator | Tuesday 16 September 2025 00:33:44 +0000 (0:00:00.781) 0:07:53.655 ***** 2025-09-16 00:33:49.483559 | orchestrator | ok: [testbed-manager] 2025-09-16 00:33:49.483571 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:33:49.483582 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:33:49.483593 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:33:49.483603 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:33:49.483614 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:33:49.483625 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:33:49.483635 | orchestrator | 2025-09-16 00:33:49.483646 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-16 00:33:49.483657 | orchestrator | Tuesday 16 September 2025 00:33:45 +0000 (0:00:00.800) 0:07:54.455 ***** 2025-09-16 00:33:49.483668 | orchestrator | changed: [testbed-manager] 2025-09-16 00:33:49.483679 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:33:49.483689 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:33:49.483700 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:33:49.483710 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:33:49.483721 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:33:49.483752 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:33:49.483763 | orchestrator | 2025-09-16 00:33:49.483773 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-09-16 00:33:49.483784 | orchestrator | Tuesday 16 September 2025 00:33:46 +0000 (0:00:01.228) 0:07:55.684 ***** 2025-09-16 00:33:49.483795 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:33:49.483806 | orchestrator 
| 2025-09-16 00:33:49.483816 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-16 00:33:49.483827 | orchestrator | Tuesday 16 September 2025 00:33:47 +0000 (0:00:00.803) 0:07:56.487 ***** 2025-09-16 00:33:49.483838 | orchestrator | ok: [testbed-manager] 2025-09-16 00:33:49.483849 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:33:49.483859 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:33:49.483870 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:33:49.483881 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:33:49.483898 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:33:49.483909 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:33:49.483920 | orchestrator | 2025-09-16 00:33:49.483931 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-16 00:33:49.483941 | orchestrator | Tuesday 16 September 2025 00:33:48 +0000 (0:00:00.804) 0:07:57.292 ***** 2025-09-16 00:33:49.483958 | orchestrator | changed: [testbed-manager] 2025-09-16 00:33:49.483969 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:33:49.483980 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:33:49.483990 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:33:49.484001 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:33:49.484012 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:33:49.484022 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:33:49.484033 | orchestrator | 2025-09-16 00:33:49.484043 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:33:49.484055 | orchestrator | testbed-manager : ok=163  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-09-16 00:33:49.484067 | orchestrator | testbed-node-0 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-16 00:33:49.484078 | orchestrator | testbed-node-1 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-16 00:33:49.484089 | orchestrator | testbed-node-2 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-16 00:33:49.484100 | orchestrator | testbed-node-3 : ok=170  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-09-16 00:33:49.484110 | orchestrator | testbed-node-4 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-16 00:33:49.484121 | orchestrator | testbed-node-5 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-16 00:33:49.484132 | orchestrator | 2025-09-16 00:33:49.484143 | orchestrator | 2025-09-16 00:33:49.484153 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:33:49.484164 | orchestrator | Tuesday 16 September 2025 00:33:49 +0000 (0:00:01.258) 0:07:58.551 ***** 2025-09-16 00:33:49.484175 | orchestrator | =============================================================================== 2025-09-16 00:33:49.484186 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.98s 2025-09-16 00:33:49.484197 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.27s 2025-09-16 00:33:49.484208 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.59s 2025-09-16 00:33:49.484218 | orchestrator | osism.commons.repository : Update package cache ------------------------ 19.65s 
2025-09-16 00:33:49.484229 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.38s 2025-09-16 00:33:49.484240 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.13s 2025-09-16 00:33:49.484252 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.45s 2025-09-16 00:33:49.484262 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.61s 2025-09-16 00:33:49.484273 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.90s 2025-09-16 00:33:49.484284 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.67s 2025-09-16 00:33:49.484301 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.41s 2025-09-16 00:33:49.806811 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.12s 2025-09-16 00:33:49.806899 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.72s 2025-09-16 00:33:49.806936 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 7.61s 2025-09-16 00:33:49.806948 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 7.49s 2025-09-16 00:33:49.806959 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.36s 2025-09-16 00:33:49.806969 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.50s 2025-09-16 00:33:49.806980 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.37s 2025-09-16 00:33:49.806991 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.69s 2025-09-16 00:33:49.807002 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.40s 2025-09-16 00:33:50.090120 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-09-16 00:33:50.090202 | orchestrator | + osism apply network 2025-09-16 00:34:02.612942 | orchestrator | 2025-09-16 00:34:02 | INFO  | Task 2f54613f-91a7-40c6-8f28-9f980afb9314 (network) was prepared for execution. 2025-09-16 00:34:02.613057 | orchestrator | 2025-09-16 00:34:02 | INFO  | It takes a moment until task 2f54613f-91a7-40c6-8f28-9f980afb9314 (network) has been started and output is visible here. 
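The osism apply network call above hands the network playbook to the OSISM task runner on the manager; the play that follows renders a netplan configuration (referenced as /etc/netplan/01-osism.yaml in the cleanup task further down) and removes the cloud-init generated 50-cloud-init.yaml. A minimal sketch of what such a netplan file can look like; the interface name, address, and MTU below are illustrative assumptions, not values read from this deployment:

    # /etc/netplan/01-osism.yaml (illustrative sketch only)
    network:
      version: 2
      ethernets:
        ens3:                        # interface name assumed for illustration
          addresses:
            - 192.168.16.10/24       # example management address/prefix
          mtu: 1500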
2025-09-16 00:34:29.823499 | orchestrator | 2025-09-16 00:34:29.823612 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-09-16 00:34:29.823630 | orchestrator | 2025-09-16 00:34:29.823642 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-09-16 00:34:29.823654 | orchestrator | Tuesday 16 September 2025 00:34:06 +0000 (0:00:00.265) 0:00:00.265 ***** 2025-09-16 00:34:29.823665 | orchestrator | ok: [testbed-manager] 2025-09-16 00:34:29.823677 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:34:29.823688 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:34:29.823700 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:34:29.823711 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:34:29.823722 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:34:29.823787 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:34:29.823799 | orchestrator | 2025-09-16 00:34:29.823811 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-09-16 00:34:29.823822 | orchestrator | Tuesday 16 September 2025 00:34:07 +0000 (0:00:00.693) 0:00:00.959 ***** 2025-09-16 00:34:29.823835 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:34:29.823849 | orchestrator | 2025-09-16 00:34:29.823860 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-09-16 00:34:29.823871 | orchestrator | Tuesday 16 September 2025 00:34:08 +0000 (0:00:01.154) 0:00:02.113 ***** 2025-09-16 00:34:29.823882 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:34:29.823893 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:34:29.823903 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:34:29.823914 | orchestrator | ok: [testbed-manager] 2025-09-16 00:34:29.823925 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:34:29.823936 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:34:29.823946 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:34:29.823957 | orchestrator | 2025-09-16 00:34:29.823968 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-09-16 00:34:29.823979 | orchestrator | Tuesday 16 September 2025 00:34:10 +0000 (0:00:01.667) 0:00:03.781 ***** 2025-09-16 00:34:29.823990 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:34:29.824001 | orchestrator | ok: [testbed-manager] 2025-09-16 00:34:29.824013 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:34:29.824025 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:34:29.824038 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:34:29.824050 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:34:29.824062 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:34:29.824074 | orchestrator | 2025-09-16 00:34:29.824086 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-09-16 00:34:29.824125 | orchestrator | Tuesday 16 September 2025 00:34:11 +0000 (0:00:01.527) 0:00:05.308 ***** 2025-09-16 00:34:29.824139 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-09-16 00:34:29.824152 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-09-16 00:34:29.824164 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-09-16 00:34:29.824176 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan) 2025-09-16 00:34:29.824188 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-09-16 00:34:29.824200 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-09-16 00:34:29.824212 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-09-16 00:34:29.824224 | orchestrator | 2025-09-16 00:34:29.824236 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-09-16 00:34:29.824248 | orchestrator | Tuesday 16 September 2025 00:34:12 +0000 (0:00:00.916) 0:00:06.225 ***** 2025-09-16 00:34:29.824261 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-16 00:34:29.824274 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-16 00:34:29.824287 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-16 00:34:29.824299 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-16 00:34:29.824311 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-16 00:34:29.824321 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-16 00:34:29.824332 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-16 00:34:29.824342 | orchestrator | 2025-09-16 00:34:29.824353 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-09-16 00:34:29.824364 | orchestrator | Tuesday 16 September 2025 00:34:16 +0000 (0:00:03.321) 0:00:09.547 ***** 2025-09-16 00:34:29.824375 | orchestrator | changed: [testbed-manager] 2025-09-16 00:34:29.824386 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:34:29.824396 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:34:29.824407 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:34:29.824418 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:34:29.824428 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:34:29.824439 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:34:29.824449 | orchestrator | 2025-09-16 00:34:29.824460 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-09-16 00:34:29.824471 | orchestrator | Tuesday 16 September 2025 00:34:17 +0000 (0:00:01.385) 0:00:10.932 ***** 2025-09-16 00:34:29.824482 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-16 00:34:29.824492 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-16 00:34:29.824503 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-16 00:34:29.824513 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-16 00:34:29.824524 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-16 00:34:29.824534 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-16 00:34:29.824545 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-16 00:34:29.824555 | orchestrator | 2025-09-16 00:34:29.824566 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-09-16 00:34:29.824577 | orchestrator | Tuesday 16 September 2025 00:34:19 +0000 (0:00:01.816) 0:00:12.748 ***** 2025-09-16 00:34:29.824588 | orchestrator | ok: [testbed-manager] 2025-09-16 00:34:29.824598 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:34:29.824609 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:34:29.824620 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:34:29.824630 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:34:29.824641 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:34:29.824651 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:34:29.824662 | orchestrator | 2025-09-16 
00:34:29.824673 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-09-16 00:34:29.824700 | orchestrator | Tuesday 16 September 2025 00:34:20 +0000 (0:00:01.060) 0:00:13.808 ***** 2025-09-16 00:34:29.824712 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:34:29.824723 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:34:29.824752 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:34:29.824772 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:34:29.824783 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:34:29.824794 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:34:29.824805 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:34:29.824815 | orchestrator | 2025-09-16 00:34:29.824826 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-09-16 00:34:29.824852 | orchestrator | Tuesday 16 September 2025 00:34:20 +0000 (0:00:00.632) 0:00:14.441 ***** 2025-09-16 00:34:29.824864 | orchestrator | ok: [testbed-manager] 2025-09-16 00:34:29.824875 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:34:29.824886 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:34:29.824896 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:34:29.824907 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:34:29.824918 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:34:29.824929 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:34:29.824939 | orchestrator | 2025-09-16 00:34:29.824951 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-09-16 00:34:29.824962 | orchestrator | Tuesday 16 September 2025 00:34:23 +0000 (0:00:02.254) 0:00:16.696 ***** 2025-09-16 00:34:29.824973 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:34:29.824984 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:34:29.824995 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:34:29.825005 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:34:29.825016 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:34:29.825027 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:34:29.825039 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-09-16 00:34:29.825051 | orchestrator | 2025-09-16 00:34:29.825062 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-09-16 00:34:29.825073 | orchestrator | Tuesday 16 September 2025 00:34:24 +0000 (0:00:00.851) 0:00:17.547 ***** 2025-09-16 00:34:29.825084 | orchestrator | ok: [testbed-manager] 2025-09-16 00:34:29.825095 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:34:29.825106 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:34:29.825117 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:34:29.825127 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:34:29.825138 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:34:29.825149 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:34:29.825160 | orchestrator | 2025-09-16 00:34:29.825171 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-09-16 00:34:29.825182 | orchestrator | Tuesday 16 September 2025 00:34:25 +0000 (0:00:01.595) 0:00:19.142 ***** 2025-09-16 00:34:29.825193 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:34:29.825206 | orchestrator | 2025-09-16 00:34:29.825217 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-16 00:34:29.825228 | orchestrator | Tuesday 16 September 2025 00:34:26 +0000 (0:00:01.307) 0:00:20.450 ***** 2025-09-16 00:34:29.825238 | orchestrator | ok: [testbed-manager] 2025-09-16 00:34:29.825249 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:34:29.825260 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:34:29.825271 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:34:29.825282 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:34:29.825293 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:34:29.825304 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:34:29.825314 | orchestrator | 2025-09-16 00:34:29.825325 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-09-16 00:34:29.825336 | orchestrator | Tuesday 16 September 2025 00:34:27 +0000 (0:00:00.963) 0:00:21.414 ***** 2025-09-16 00:34:29.825347 | orchestrator | ok: [testbed-manager] 2025-09-16 00:34:29.825358 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:34:29.825369 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:34:29.825386 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:34:29.825397 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:34:29.825408 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:34:29.825419 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:34:29.825430 | orchestrator | 2025-09-16 00:34:29.825441 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-16 00:34:29.825452 | orchestrator | Tuesday 16 September 2025 00:34:28 +0000 (0:00:00.769) 0:00:22.183 ***** 2025-09-16 00:34:29.825463 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-09-16 00:34:29.825474 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-09-16 00:34:29.825485 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-09-16 00:34:29.825496 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-09-16 00:34:29.825507 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-16 00:34:29.825517 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-09-16 00:34:29.825528 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-09-16 00:34:29.825539 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-16 00:34:29.825550 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-16 00:34:29.825561 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-09-16 00:34:29.825572 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-16 00:34:29.825583 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-16 00:34:29.825594 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-16 00:34:29.825605 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-16 
00:34:29.825616 | orchestrator | 2025-09-16 00:34:29.825634 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-09-16 00:34:44.666279 | orchestrator | Tuesday 16 September 2025 00:34:29 +0000 (0:00:01.149) 0:00:23.333 ***** 2025-09-16 00:34:44.666394 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:34:44.666412 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:34:44.666424 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:34:44.666435 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:34:44.666446 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:34:44.666457 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:34:44.666468 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:34:44.666480 | orchestrator | 2025-09-16 00:34:44.666507 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-09-16 00:34:44.666520 | orchestrator | Tuesday 16 September 2025 00:34:30 +0000 (0:00:00.618) 0:00:23.951 ***** 2025-09-16 00:34:44.666533 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-2, testbed-node-1, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:34:44.666547 | orchestrator | 2025-09-16 00:34:44.666558 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-09-16 00:34:44.666569 | orchestrator | Tuesday 16 September 2025 00:34:34 +0000 (0:00:04.334) 0:00:28.285 ***** 2025-09-16 00:34:44.666581 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-16 00:34:44.666595 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-16 00:34:44.666607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-16 00:34:44.666645 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-16 00:34:44.666657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-16 00:34:44.666668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-16 00:34:44.666679 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': 
['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-16 00:34:44.666690 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-16 00:34:44.666701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-16 00:34:44.666719 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-16 00:34:44.666731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-16 00:34:44.666803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-16 00:34:44.666817 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-16 00:34:44.666836 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-16 00:34:44.666848 | orchestrator | 2025-09-16 00:34:44.666861 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-09-16 00:34:44.666874 | orchestrator | Tuesday 16 September 2025 00:34:39 +0000 (0:00:04.969) 0:00:33.255 ***** 2025-09-16 00:34:44.666887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-16 00:34:44.666909 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-16 00:34:44.666921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-16 00:34:44.666934 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-16 00:34:44.666947 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-16 00:34:44.666959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-16 00:34:44.666972 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-16 00:34:44.666984 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-16 00:34:44.666997 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-16 00:34:44.667010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-16 00:34:44.667023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-16 00:34:44.667035 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-16 00:34:44.667056 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-16 00:34:50.190129 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-16 00:34:50.190242 | orchestrator | 2025-09-16 00:34:50.190259 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-09-16 00:34:50.190273 | orchestrator | Tuesday 16 September 2025 00:34:44 +0000 (0:00:04.923) 
0:00:38.179 ***** 2025-09-16 00:34:50.190328 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:34:50.190342 | orchestrator | 2025-09-16 00:34:50.190354 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-16 00:34:50.190365 | orchestrator | Tuesday 16 September 2025 00:34:45 +0000 (0:00:01.114) 0:00:39.293 ***** 2025-09-16 00:34:50.190376 | orchestrator | ok: [testbed-manager] 2025-09-16 00:34:50.190388 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:34:50.190399 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:34:50.190409 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:34:50.190420 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:34:50.190430 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:34:50.190441 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:34:50.190452 | orchestrator | 2025-09-16 00:34:50.190462 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-16 00:34:50.190473 | orchestrator | Tuesday 16 September 2025 00:34:46 +0000 (0:00:00.985) 0:00:40.279 ***** 2025-09-16 00:34:50.190484 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-16 00:34:50.190496 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-16 00:34:50.190506 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-16 00:34:50.190517 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-16 00:34:50.190528 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-16 00:34:50.190538 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-16 00:34:50.190549 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-16 00:34:50.190560 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:34:50.190571 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-16 00:34:50.190582 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-16 00:34:50.190593 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-16 00:34:50.190604 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-16 00:34:50.190616 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-16 00:34:50.190628 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:34:50.190640 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-16 00:34:50.190653 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-16 00:34:50.190665 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-16 00:34:50.190677 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-16 00:34:50.190689 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:34:50.190702 | orchestrator | skipping: [testbed-node-3] => 
(item=/etc/systemd/network/30-vxlan1.network)  2025-09-16 00:34:50.190714 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:34:50.190727 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-16 00:34:50.190762 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-16 00:34:50.190775 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-16 00:34:50.190787 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-16 00:34:50.190799 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-16 00:34:50.190811 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-16 00:34:50.190833 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-16 00:34:50.190846 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:34:50.190859 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:34:50.190872 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-16 00:34:50.190884 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-16 00:34:50.190896 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-16 00:34:50.190909 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-16 00:34:50.190921 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:34:50.190932 | orchestrator | 2025-09-16 00:34:50.190945 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-09-16 00:34:50.190976 | orchestrator | Tuesday 16 September 2025 00:34:48 +0000 (0:00:01.824) 0:00:42.104 ***** 2025-09-16 00:34:50.190988 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:34:50.190999 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:34:50.191010 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:34:50.191020 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:34:50.191031 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:34:50.191047 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:34:50.191058 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:34:50.191069 | orchestrator | 2025-09-16 00:34:50.191080 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-09-16 00:34:50.191090 | orchestrator | Tuesday 16 September 2025 00:34:49 +0000 (0:00:00.602) 0:00:42.706 ***** 2025-09-16 00:34:50.191101 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:34:50.191112 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:34:50.191122 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:34:50.191133 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:34:50.191144 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:34:50.191154 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:34:50.191165 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:34:50.191175 | orchestrator | 2025-09-16 00:34:50.191186 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:34:50.191198 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-16 00:34:50.191211 
| orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-16 00:34:50.191222 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-16 00:34:50.191233 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-16 00:34:50.191244 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-16 00:34:50.191255 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-16 00:34:50.191266 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-16 00:34:50.191277 | orchestrator | 2025-09-16 00:34:50.191287 | orchestrator | 2025-09-16 00:34:50.191298 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:34:50.191309 | orchestrator | Tuesday 16 September 2025 00:34:49 +0000 (0:00:00.684) 0:00:43.391 ***** 2025-09-16 00:34:50.191320 | orchestrator | =============================================================================== 2025-09-16 00:34:50.191337 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.97s 2025-09-16 00:34:50.191348 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.92s 2025-09-16 00:34:50.191359 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.33s 2025-09-16 00:34:50.191370 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.32s 2025-09-16 00:34:50.191380 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.25s 2025-09-16 00:34:50.191391 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.82s 2025-09-16 00:34:50.191402 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.82s 2025-09-16 00:34:50.191412 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.67s 2025-09-16 00:34:50.191423 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.60s 2025-09-16 00:34:50.191434 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.53s 2025-09-16 00:34:50.191444 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.39s 2025-09-16 00:34:50.191455 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.31s 2025-09-16 00:34:50.191466 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.15s 2025-09-16 00:34:50.191477 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.15s 2025-09-16 00:34:50.191487 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.11s 2025-09-16 00:34:50.191498 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.06s 2025-09-16 00:34:50.191508 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.99s 2025-09-16 00:34:50.191519 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.96s 2025-09-16 00:34:50.191530 | orchestrator | osism.commons.network : Create required directories 
--------------------- 0.92s 2025-09-16 00:34:50.191541 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.85s 2025-09-16 00:34:50.472047 | orchestrator | + osism apply wireguard 2025-09-16 00:35:02.479414 | orchestrator | 2025-09-16 00:35:02 | INFO  | Task c1f1cc14-9e53-4403-bace-2769a7e1ec3b (wireguard) was prepared for execution. 2025-09-16 00:35:02.479532 | orchestrator | 2025-09-16 00:35:02 | INFO  | It takes a moment until task c1f1cc14-9e53-4403-bace-2769a7e1ec3b (wireguard) has been started and output is visible here. 2025-09-16 00:35:21.767594 | orchestrator | 2025-09-16 00:35:21.767715 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-09-16 00:35:21.767734 | orchestrator | 2025-09-16 00:35:21.767747 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-09-16 00:35:21.767815 | orchestrator | Tuesday 16 September 2025 00:35:06 +0000 (0:00:00.227) 0:00:00.227 ***** 2025-09-16 00:35:21.767828 | orchestrator | ok: [testbed-manager] 2025-09-16 00:35:21.767840 | orchestrator | 2025-09-16 00:35:21.767851 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-09-16 00:35:21.767862 | orchestrator | Tuesday 16 September 2025 00:35:07 +0000 (0:00:01.488) 0:00:01.716 ***** 2025-09-16 00:35:21.767873 | orchestrator | changed: [testbed-manager] 2025-09-16 00:35:21.767885 | orchestrator | 2025-09-16 00:35:21.767896 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-09-16 00:35:21.767907 | orchestrator | Tuesday 16 September 2025 00:35:14 +0000 (0:00:06.415) 0:00:08.132 ***** 2025-09-16 00:35:21.767918 | orchestrator | changed: [testbed-manager] 2025-09-16 00:35:21.767928 | orchestrator | 2025-09-16 00:35:21.767939 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-09-16 00:35:21.767950 | orchestrator | Tuesday 16 September 2025 00:35:14 +0000 (0:00:00.556) 0:00:08.688 ***** 2025-09-16 00:35:21.767961 | orchestrator | changed: [testbed-manager] 2025-09-16 00:35:21.767996 | orchestrator | 2025-09-16 00:35:21.768008 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-09-16 00:35:21.768020 | orchestrator | Tuesday 16 September 2025 00:35:15 +0000 (0:00:00.446) 0:00:09.135 ***** 2025-09-16 00:35:21.768031 | orchestrator | ok: [testbed-manager] 2025-09-16 00:35:21.768042 | orchestrator | 2025-09-16 00:35:21.768052 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-09-16 00:35:21.768063 | orchestrator | Tuesday 16 September 2025 00:35:15 +0000 (0:00:00.532) 0:00:09.668 ***** 2025-09-16 00:35:21.768074 | orchestrator | ok: [testbed-manager] 2025-09-16 00:35:21.768084 | orchestrator | 2025-09-16 00:35:21.768095 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-09-16 00:35:21.768107 | orchestrator | Tuesday 16 September 2025 00:35:16 +0000 (0:00:00.533) 0:00:10.202 ***** 2025-09-16 00:35:21.768127 | orchestrator | ok: [testbed-manager] 2025-09-16 00:35:21.768146 | orchestrator | 2025-09-16 00:35:21.768165 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-09-16 00:35:21.768183 | orchestrator | Tuesday 16 September 2025 00:35:16 +0000 (0:00:00.410) 0:00:10.612 ***** 2025-09-16 00:35:21.768201 | orchestrator | 
changed: [testbed-manager] 2025-09-16 00:35:21.768221 | orchestrator | 2025-09-16 00:35:21.768239 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-09-16 00:35:21.768259 | orchestrator | Tuesday 16 September 2025 00:35:18 +0000 (0:00:01.151) 0:00:11.764 ***** 2025-09-16 00:35:21.768275 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-16 00:35:21.768286 | orchestrator | changed: [testbed-manager] 2025-09-16 00:35:21.768297 | orchestrator | 2025-09-16 00:35:21.768308 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-09-16 00:35:21.768319 | orchestrator | Tuesday 16 September 2025 00:35:18 +0000 (0:00:00.912) 0:00:12.676 ***** 2025-09-16 00:35:21.768330 | orchestrator | changed: [testbed-manager] 2025-09-16 00:35:21.768340 | orchestrator | 2025-09-16 00:35:21.768351 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-09-16 00:35:21.768362 | orchestrator | Tuesday 16 September 2025 00:35:20 +0000 (0:00:01.650) 0:00:14.326 ***** 2025-09-16 00:35:21.768373 | orchestrator | changed: [testbed-manager] 2025-09-16 00:35:21.768384 | orchestrator | 2025-09-16 00:35:21.768394 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:35:21.768406 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:35:21.768418 | orchestrator | 2025-09-16 00:35:21.768428 | orchestrator | 2025-09-16 00:35:21.768439 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:35:21.768450 | orchestrator | Tuesday 16 September 2025 00:35:21 +0000 (0:00:00.895) 0:00:15.222 ***** 2025-09-16 00:35:21.768461 | orchestrator | =============================================================================== 2025-09-16 00:35:21.768472 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.42s 2025-09-16 00:35:21.768482 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.65s 2025-09-16 00:35:21.768493 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.49s 2025-09-16 00:35:21.768504 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.15s 2025-09-16 00:35:21.768515 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.91s 2025-09-16 00:35:21.768525 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.90s 2025-09-16 00:35:21.768536 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s 2025-09-16 00:35:21.768547 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.53s 2025-09-16 00:35:21.768558 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.53s 2025-09-16 00:35:21.768569 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s 2025-09-16 00:35:21.768589 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s 2025-09-16 00:35:22.058619 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-09-16 00:35:22.093827 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-09-16 00:35:22.093916 | 
orchestrator | Dload Upload Total Spent Left Speed 2025-09-16 00:35:22.171163 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 195 0 --:--:-- --:--:-- --:--:-- 197 2025-09-16 00:35:22.183628 | orchestrator | + osism apply --environment custom workarounds 2025-09-16 00:35:23.987918 | orchestrator | 2025-09-16 00:35:23 | INFO  | Trying to run play workarounds in environment custom 2025-09-16 00:35:34.132895 | orchestrator | 2025-09-16 00:35:34 | INFO  | Task 910cea21-a156-405c-a398-b1244ad085cb (workarounds) was prepared for execution. 2025-09-16 00:35:34.133010 | orchestrator | 2025-09-16 00:35:34 | INFO  | It takes a moment until task 910cea21-a156-405c-a398-b1244ad085cb (workarounds) has been started and output is visible here. 2025-09-16 00:35:58.044335 | orchestrator | 2025-09-16 00:35:58.044447 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-16 00:35:58.044465 | orchestrator | 2025-09-16 00:35:58.044477 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-09-16 00:35:58.044489 | orchestrator | Tuesday 16 September 2025 00:35:38 +0000 (0:00:00.144) 0:00:00.144 ***** 2025-09-16 00:35:58.044500 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-09-16 00:35:58.044511 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-09-16 00:35:58.044522 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-09-16 00:35:58.044533 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-09-16 00:35:58.044544 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-09-16 00:35:58.044555 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-09-16 00:35:58.044565 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-09-16 00:35:58.044576 | orchestrator | 2025-09-16 00:35:58.044587 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-09-16 00:35:58.044598 | orchestrator | 2025-09-16 00:35:58.044609 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-16 00:35:58.044620 | orchestrator | Tuesday 16 September 2025 00:35:38 +0000 (0:00:00.733) 0:00:00.877 ***** 2025-09-16 00:35:58.044631 | orchestrator | ok: [testbed-manager] 2025-09-16 00:35:58.044643 | orchestrator | 2025-09-16 00:35:58.044653 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-09-16 00:35:58.044664 | orchestrator | 2025-09-16 00:35:58.044675 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-16 00:35:58.044685 | orchestrator | Tuesday 16 September 2025 00:35:40 +0000 (0:00:02.224) 0:00:03.102 ***** 2025-09-16 00:35:58.044696 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:35:58.044707 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:35:58.044718 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:35:58.044729 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:35:58.044739 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:35:58.044750 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:35:58.044760 | orchestrator | 2025-09-16 00:35:58.044805 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-09-16 00:35:58.044817 | orchestrator | 
2025-09-16 00:35:58.044828 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-09-16 00:35:58.044838 | orchestrator | Tuesday 16 September 2025 00:35:42 +0000 (0:00:01.821) 0:00:04.923 ***** 2025-09-16 00:35:58.044850 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-16 00:35:58.044863 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-16 00:35:58.044894 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-16 00:35:58.044908 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-16 00:35:58.044921 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-16 00:35:58.044934 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-16 00:35:58.044946 | orchestrator | 2025-09-16 00:35:58.044959 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-09-16 00:35:58.044971 | orchestrator | Tuesday 16 September 2025 00:35:44 +0000 (0:00:01.460) 0:00:06.383 ***** 2025-09-16 00:35:58.044984 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:35:58.044996 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:35:58.045008 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:35:58.045020 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:35:58.045033 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:35:58.045045 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:35:58.045057 | orchestrator | 2025-09-16 00:35:58.045069 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-09-16 00:35:58.045083 | orchestrator | Tuesday 16 September 2025 00:35:47 +0000 (0:00:03.177) 0:00:09.561 ***** 2025-09-16 00:35:58.045095 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:35:58.045108 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:35:58.045120 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:35:58.045133 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:35:58.045144 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:35:58.045156 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:35:58.045169 | orchestrator | 2025-09-16 00:35:58.045182 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-09-16 00:35:58.045194 | orchestrator | 2025-09-16 00:35:58.045207 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-09-16 00:35:58.045219 | orchestrator | Tuesday 16 September 2025 00:35:48 +0000 (0:00:00.648) 0:00:10.209 ***** 2025-09-16 00:35:58.045232 | orchestrator | changed: [testbed-manager] 2025-09-16 00:35:58.045243 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:35:58.045253 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:35:58.045264 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:35:58.045274 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:35:58.045284 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:35:58.045295 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:35:58.045306 | orchestrator | 2025-09-16 00:35:58.045316 | 
orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-09-16 00:35:58.045327 | orchestrator | Tuesday 16 September 2025 00:35:49 +0000 (0:00:01.637) 0:00:11.847 ***** 2025-09-16 00:35:58.045345 | orchestrator | changed: [testbed-manager] 2025-09-16 00:35:58.045356 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:35:58.045367 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:35:58.045378 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:35:58.045388 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:35:58.045399 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:35:58.045426 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:35:58.045437 | orchestrator | 2025-09-16 00:35:58.045448 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-09-16 00:35:58.045460 | orchestrator | Tuesday 16 September 2025 00:35:51 +0000 (0:00:01.619) 0:00:13.466 ***** 2025-09-16 00:35:58.045471 | orchestrator | ok: [testbed-manager] 2025-09-16 00:35:58.045482 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:35:58.045493 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:35:58.045503 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:35:58.045514 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:35:58.045532 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:35:58.045543 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:35:58.045553 | orchestrator | 2025-09-16 00:35:58.045564 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-09-16 00:35:58.045575 | orchestrator | Tuesday 16 September 2025 00:35:52 +0000 (0:00:01.489) 0:00:14.956 ***** 2025-09-16 00:35:58.045586 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:35:58.045596 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:35:58.045607 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:35:58.045617 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:35:58.045628 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:35:58.045638 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:35:58.045649 | orchestrator | changed: [testbed-manager] 2025-09-16 00:35:58.045659 | orchestrator | 2025-09-16 00:35:58.045670 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-09-16 00:35:58.045681 | orchestrator | Tuesday 16 September 2025 00:35:54 +0000 (0:00:02.017) 0:00:16.974 ***** 2025-09-16 00:35:58.045691 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:35:58.045702 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:35:58.045713 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:35:58.045723 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:35:58.045734 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:35:58.045744 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:35:58.045755 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:35:58.045784 | orchestrator | 2025-09-16 00:35:58.045795 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-09-16 00:35:58.045806 | orchestrator | 2025-09-16 00:35:58.045816 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-09-16 00:35:58.045827 | orchestrator | Tuesday 16 September 2025 00:35:55 +0000 (0:00:00.551) 0:00:17.525 ***** 2025-09-16 00:35:58.045838 | orchestrator | ok: [testbed-manager] 2025-09-16 00:35:58.045849 
| orchestrator | ok: [testbed-node-0] 2025-09-16 00:35:58.045859 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:35:58.045870 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:35:58.045880 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:35:58.045891 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:35:58.045901 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:35:58.045912 | orchestrator | 2025-09-16 00:35:58.045923 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:35:58.045935 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-16 00:35:58.045946 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:35:58.045957 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:35:58.045968 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:35:58.045979 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:35:58.045990 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:35:58.046001 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:35:58.046011 | orchestrator | 2025-09-16 00:35:58.046081 | orchestrator | 2025-09-16 00:35:58.046093 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:35:58.046103 | orchestrator | Tuesday 16 September 2025 00:35:58 +0000 (0:00:02.625) 0:00:20.150 ***** 2025-09-16 00:35:58.046123 | orchestrator | =============================================================================== 2025-09-16 00:35:58.046133 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.18s 2025-09-16 00:35:58.046144 | orchestrator | Install python3-docker -------------------------------------------------- 2.63s 2025-09-16 00:35:58.046155 | orchestrator | Apply netplan configuration --------------------------------------------- 2.22s 2025-09-16 00:35:58.046165 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.02s 2025-09-16 00:35:58.046176 | orchestrator | Apply netplan configuration --------------------------------------------- 1.82s 2025-09-16 00:35:58.046186 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.64s 2025-09-16 00:35:58.046197 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.62s 2025-09-16 00:35:58.046208 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.49s 2025-09-16 00:35:58.046223 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.46s 2025-09-16 00:35:58.046234 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.73s 2025-09-16 00:35:58.046245 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.65s 2025-09-16 00:35:58.046264 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.55s 2025-09-16 00:35:58.588942 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-09-16 00:36:10.536626 | orchestrator | 
2025-09-16 00:36:10 | INFO  | Task 1177b604-8042-41fa-94af-c9a77863b7d1 (reboot) was prepared for execution. 2025-09-16 00:36:10.536734 | orchestrator | 2025-09-16 00:36:10 | INFO  | It takes a moment until task 1177b604-8042-41fa-94af-c9a77863b7d1 (reboot) has been started and output is visible here. 2025-09-16 00:36:20.411574 | orchestrator | 2025-09-16 00:36:20.411665 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-16 00:36:20.411675 | orchestrator | 2025-09-16 00:36:20.411682 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-16 00:36:20.411689 | orchestrator | Tuesday 16 September 2025 00:36:14 +0000 (0:00:00.207) 0:00:00.207 ***** 2025-09-16 00:36:20.411696 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:36:20.411703 | orchestrator | 2025-09-16 00:36:20.411710 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-16 00:36:20.411716 | orchestrator | Tuesday 16 September 2025 00:36:14 +0000 (0:00:00.106) 0:00:00.313 ***** 2025-09-16 00:36:20.411722 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:36:20.411728 | orchestrator | 2025-09-16 00:36:20.411735 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-16 00:36:20.411741 | orchestrator | Tuesday 16 September 2025 00:36:15 +0000 (0:00:00.899) 0:00:01.213 ***** 2025-09-16 00:36:20.411747 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:36:20.411754 | orchestrator | 2025-09-16 00:36:20.411760 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-16 00:36:20.411810 | orchestrator | 2025-09-16 00:36:20.411817 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-16 00:36:20.411823 | orchestrator | Tuesday 16 September 2025 00:36:15 +0000 (0:00:00.113) 0:00:01.327 ***** 2025-09-16 00:36:20.411829 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:36:20.411836 | orchestrator | 2025-09-16 00:36:20.411842 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-16 00:36:20.411848 | orchestrator | Tuesday 16 September 2025 00:36:15 +0000 (0:00:00.113) 0:00:01.440 ***** 2025-09-16 00:36:20.411854 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:36:20.411860 | orchestrator | 2025-09-16 00:36:20.411866 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-16 00:36:20.411872 | orchestrator | Tuesday 16 September 2025 00:36:16 +0000 (0:00:00.693) 0:00:02.134 ***** 2025-09-16 00:36:20.411879 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:36:20.411902 | orchestrator | 2025-09-16 00:36:20.411909 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-16 00:36:20.411915 | orchestrator | 2025-09-16 00:36:20.411921 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-16 00:36:20.411927 | orchestrator | Tuesday 16 September 2025 00:36:16 +0000 (0:00:00.101) 0:00:02.236 ***** 2025-09-16 00:36:20.411933 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:36:20.411939 | orchestrator | 2025-09-16 00:36:20.411945 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-16 00:36:20.411951 | orchestrator | Tuesday 16 September 2025 00:36:16 
+0000 (0:00:00.203) 0:00:02.439 ***** 2025-09-16 00:36:20.411957 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:36:20.411964 | orchestrator | 2025-09-16 00:36:20.411970 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-16 00:36:20.411976 | orchestrator | Tuesday 16 September 2025 00:36:17 +0000 (0:00:00.662) 0:00:03.101 ***** 2025-09-16 00:36:20.411982 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:36:20.411988 | orchestrator | 2025-09-16 00:36:20.411994 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-16 00:36:20.412000 | orchestrator | 2025-09-16 00:36:20.412006 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-16 00:36:20.412012 | orchestrator | Tuesday 16 September 2025 00:36:17 +0000 (0:00:00.119) 0:00:03.221 ***** 2025-09-16 00:36:20.412018 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:36:20.412025 | orchestrator | 2025-09-16 00:36:20.412031 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-16 00:36:20.412037 | orchestrator | Tuesday 16 September 2025 00:36:17 +0000 (0:00:00.097) 0:00:03.319 ***** 2025-09-16 00:36:20.412043 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:36:20.412049 | orchestrator | 2025-09-16 00:36:20.412055 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-16 00:36:20.412061 | orchestrator | Tuesday 16 September 2025 00:36:18 +0000 (0:00:00.647) 0:00:03.966 ***** 2025-09-16 00:36:20.412067 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:36:20.412073 | orchestrator | 2025-09-16 00:36:20.412079 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-16 00:36:20.412085 | orchestrator | 2025-09-16 00:36:20.412091 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-16 00:36:20.412097 | orchestrator | Tuesday 16 September 2025 00:36:18 +0000 (0:00:00.104) 0:00:04.071 ***** 2025-09-16 00:36:20.412103 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:36:20.412110 | orchestrator | 2025-09-16 00:36:20.412116 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-16 00:36:20.412122 | orchestrator | Tuesday 16 September 2025 00:36:18 +0000 (0:00:00.101) 0:00:04.173 ***** 2025-09-16 00:36:20.412128 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:36:20.412134 | orchestrator | 2025-09-16 00:36:20.412141 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-16 00:36:20.412148 | orchestrator | Tuesday 16 September 2025 00:36:19 +0000 (0:00:00.677) 0:00:04.851 ***** 2025-09-16 00:36:20.412155 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:36:20.412162 | orchestrator | 2025-09-16 00:36:20.412169 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-16 00:36:20.412176 | orchestrator | 2025-09-16 00:36:20.412183 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-16 00:36:20.412190 | orchestrator | Tuesday 16 September 2025 00:36:19 +0000 (0:00:00.100) 0:00:04.951 ***** 2025-09-16 00:36:20.412197 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:36:20.412204 | orchestrator | 2025-09-16 00:36:20.412211 | orchestrator 
| TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-16 00:36:20.412218 | orchestrator | Tuesday 16 September 2025 00:36:19 +0000 (0:00:00.093) 0:00:05.045 ***** 2025-09-16 00:36:20.412225 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:36:20.412232 | orchestrator | 2025-09-16 00:36:20.412239 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-16 00:36:20.412251 | orchestrator | Tuesday 16 September 2025 00:36:20 +0000 (0:00:00.689) 0:00:05.735 ***** 2025-09-16 00:36:20.412270 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:36:20.412278 | orchestrator | 2025-09-16 00:36:20.412285 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:36:20.412293 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:36:20.412301 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:36:20.412308 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:36:20.412315 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:36:20.412322 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:36:20.412329 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:36:20.412335 | orchestrator | 2025-09-16 00:36:20.412342 | orchestrator | 2025-09-16 00:36:20.412349 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:36:20.412356 | orchestrator | Tuesday 16 September 2025 00:36:20 +0000 (0:00:00.037) 0:00:05.772 ***** 2025-09-16 00:36:20.412363 | orchestrator | =============================================================================== 2025-09-16 00:36:20.412370 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.27s 2025-09-16 00:36:20.412381 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.72s 2025-09-16 00:36:20.412388 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.58s 2025-09-16 00:36:20.699925 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-09-16 00:36:32.800928 | orchestrator | 2025-09-16 00:36:32 | INFO  | Task 46b91528-6ede-4d47-ba1b-fbbea798cf97 (wait-for-connection) was prepared for execution. 2025-09-16 00:36:32.801065 | orchestrator | 2025-09-16 00:36:32 | INFO  | It takes a moment until task 46b91528-6ede-4d47-ba1b-fbbea798cf97 (wait-for-connection) has been started and output is visible here. 
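The two plays above follow a common pattern: reboot each node without waiting for it to come back, then run a separate wait-for-connection play to confirm the nodes are reachable again. A minimal shell sketch of that pattern is shown below; it assumes direct SSH access and an explicit node list, both of which are illustrative only, since the actual run drives this through osism apply reboot and osism apply wait-for-connection.

    # Illustrative sketch only: mirrors "reboot, do not wait" followed by "wait until reachable".
    NODES="testbed-node-0 testbed-node-1 testbed-node-2 testbed-node-3 testbed-node-4 testbed-node-5"
    for node in $NODES; do
        ssh "$node" 'sudo systemctl reboot' || true        # fire and forget, do not wait for shutdown
    done
    for node in $NODES; do
        until ssh -o ConnectTimeout=5 "$node" true 2>/dev/null; do
            sleep 10                                       # poll until SSH answers again
        done
    done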
2025-09-16 00:36:48.686385 | orchestrator | 2025-09-16 00:36:48.686494 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-09-16 00:36:48.686510 | orchestrator | 2025-09-16 00:36:48.686521 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-09-16 00:36:48.686531 | orchestrator | Tuesday 16 September 2025 00:36:36 +0000 (0:00:00.235) 0:00:00.235 ***** 2025-09-16 00:36:48.686541 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:36:48.686552 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:36:48.686561 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:36:48.686571 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:36:48.686580 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:36:48.686589 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:36:48.686599 | orchestrator | 2025-09-16 00:36:48.686608 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:36:48.686619 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:36:48.686630 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:36:48.686640 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:36:48.686674 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:36:48.686684 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:36:48.686694 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:36:48.686703 | orchestrator | 2025-09-16 00:36:48.686713 | orchestrator | 2025-09-16 00:36:48.686722 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:36:48.686746 | orchestrator | Tuesday 16 September 2025 00:36:48 +0000 (0:00:11.630) 0:00:11.866 ***** 2025-09-16 00:36:48.686757 | orchestrator | =============================================================================== 2025-09-16 00:36:48.686820 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.63s 2025-09-16 00:36:48.941461 | orchestrator | + osism apply hddtemp 2025-09-16 00:37:00.946144 | orchestrator | 2025-09-16 00:37:00 | INFO  | Task 239d82ef-daf4-416d-adec-b4bcbc44833c (hddtemp) was prepared for execution. 2025-09-16 00:37:00.946255 | orchestrator | 2025-09-16 00:37:00 | INFO  | It takes a moment until task 239d82ef-daf4-416d-adec-b4bcbc44833c (hddtemp) has been started and output is visible here. 
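The hddtemp play whose output follows enables the drivetemp kernel module and installs the lm-sensors package and service on the Ubuntu hosts. A rough manual equivalent of those reported steps is sketched below; the modules-load.d path and the lm-sensors unit name are assumptions for illustration and are not taken from the role itself.

    # Rough manual equivalent of the hddtemp role steps shown below (Debian/Ubuntu hosts assumed).
    if modinfo drivetemp >/dev/null 2>&1; then                            # module available for this kernel?
        echo drivetemp | sudo tee /etc/modules-load.d/drivetemp.conf      # enable it persistently (assumed path)
        sudo modprobe drivetemp                                           # load it now
    fi
    sudo apt-get install -y lm-sensors                                    # sensors tooling and service
    sudo systemctl enable --now lm-sensors.service                        # start and keep enabled across reboots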
2025-09-16 00:37:27.242644 | orchestrator | 2025-09-16 00:37:27.242761 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-09-16 00:37:27.242828 | orchestrator | 2025-09-16 00:37:27.242841 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-09-16 00:37:27.242853 | orchestrator | Tuesday 16 September 2025 00:37:04 +0000 (0:00:00.235) 0:00:00.235 ***** 2025-09-16 00:37:27.242865 | orchestrator | ok: [testbed-manager] 2025-09-16 00:37:27.242877 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:37:27.242888 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:37:27.242899 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:37:27.242910 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:37:27.242921 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:37:27.242931 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:37:27.242942 | orchestrator | 2025-09-16 00:37:27.242953 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-09-16 00:37:27.242964 | orchestrator | Tuesday 16 September 2025 00:37:05 +0000 (0:00:00.534) 0:00:00.769 ***** 2025-09-16 00:37:27.242977 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:37:27.242991 | orchestrator | 2025-09-16 00:37:27.243003 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-09-16 00:37:27.243014 | orchestrator | Tuesday 16 September 2025 00:37:06 +0000 (0:00:00.914) 0:00:01.683 ***** 2025-09-16 00:37:27.243024 | orchestrator | ok: [testbed-manager] 2025-09-16 00:37:27.243036 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:37:27.243046 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:37:27.243057 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:37:27.243068 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:37:27.243079 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:37:27.243089 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:37:27.243100 | orchestrator | 2025-09-16 00:37:27.243111 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-09-16 00:37:27.243122 | orchestrator | Tuesday 16 September 2025 00:37:08 +0000 (0:00:01.899) 0:00:03.583 ***** 2025-09-16 00:37:27.243133 | orchestrator | changed: [testbed-manager] 2025-09-16 00:37:27.243144 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:37:27.243155 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:37:27.243166 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:37:27.243178 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:37:27.243214 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:37:27.243226 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:37:27.243238 | orchestrator | 2025-09-16 00:37:27.243251 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-09-16 00:37:27.243263 | orchestrator | Tuesday 16 September 2025 00:37:09 +0000 (0:00:00.983) 0:00:04.567 ***** 2025-09-16 00:37:27.243276 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:37:27.243288 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:37:27.243300 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:37:27.243312 | orchestrator | ok: [testbed-node-3] 2025-09-16 
00:37:27.243324 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:37:27.243337 | orchestrator | ok: [testbed-manager] 2025-09-16 00:37:27.243349 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:37:27.243361 | orchestrator | 2025-09-16 00:37:27.243374 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-09-16 00:37:27.243386 | orchestrator | Tuesday 16 September 2025 00:37:10 +0000 (0:00:01.033) 0:00:05.601 ***** 2025-09-16 00:37:27.243398 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:37:27.243411 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:37:27.243423 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:37:27.243435 | orchestrator | changed: [testbed-manager] 2025-09-16 00:37:27.243448 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:37:27.243459 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:37:27.243471 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:37:27.243484 | orchestrator | 2025-09-16 00:37:27.243496 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-09-16 00:37:27.243508 | orchestrator | Tuesday 16 September 2025 00:37:10 +0000 (0:00:00.764) 0:00:06.366 ***** 2025-09-16 00:37:27.243521 | orchestrator | changed: [testbed-manager] 2025-09-16 00:37:27.243534 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:37:27.243544 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:37:27.243555 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:37:27.243566 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:37:27.243576 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:37:27.243587 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:37:27.243598 | orchestrator | 2025-09-16 00:37:27.243608 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-09-16 00:37:27.243619 | orchestrator | Tuesday 16 September 2025 00:37:23 +0000 (0:00:12.741) 0:00:19.107 ***** 2025-09-16 00:37:27.243630 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:37:27.243642 | orchestrator | 2025-09-16 00:37:27.243652 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-09-16 00:37:27.243663 | orchestrator | Tuesday 16 September 2025 00:37:25 +0000 (0:00:01.362) 0:00:20.470 ***** 2025-09-16 00:37:27.243674 | orchestrator | changed: [testbed-manager] 2025-09-16 00:37:27.243698 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:37:27.243710 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:37:27.243721 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:37:27.243731 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:37:27.243742 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:37:27.243752 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:37:27.243763 | orchestrator | 2025-09-16 00:37:27.243798 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:37:27.243810 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:37:27.243839 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-16 00:37:27.243851 | 
orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-16 00:37:27.243871 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-16 00:37:27.243882 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-16 00:37:27.243893 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-16 00:37:27.243904 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-16 00:37:27.243915 | orchestrator | 2025-09-16 00:37:27.243926 | orchestrator | 2025-09-16 00:37:27.243936 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:37:27.243947 | orchestrator | Tuesday 16 September 2025 00:37:26 +0000 (0:00:01.807) 0:00:22.277 ***** 2025-09-16 00:37:27.243958 | orchestrator | =============================================================================== 2025-09-16 00:37:27.243969 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.74s 2025-09-16 00:37:27.243980 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.90s 2025-09-16 00:37:27.243990 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.81s 2025-09-16 00:37:27.244001 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.36s 2025-09-16 00:37:27.244012 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.03s 2025-09-16 00:37:27.244022 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 0.98s 2025-09-16 00:37:27.244033 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 0.91s 2025-09-16 00:37:27.244044 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.76s 2025-09-16 00:37:27.244054 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.53s 2025-09-16 00:37:27.510208 | orchestrator | ++ semver latest 7.1.1 2025-09-16 00:37:27.558276 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-16 00:37:27.558334 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-16 00:37:27.558348 | orchestrator | + sudo systemctl restart manager.service 2025-09-16 00:37:55.738307 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-16 00:37:55.738422 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-16 00:37:55.738438 | orchestrator | + local max_attempts=60 2025-09-16 00:37:55.738452 | orchestrator | + local name=ceph-ansible 2025-09-16 00:37:55.738463 | orchestrator | + local attempt_num=1 2025-09-16 00:37:55.738475 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-16 00:37:55.771441 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-16 00:37:55.771491 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-16 00:37:55.771508 | orchestrator | + sleep 5 2025-09-16 00:38:00.775711 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-16 00:38:00.801424 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-16 00:38:00.801462 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-16 00:38:00.801477 | orchestrator | + sleep 5 2025-09-16 
00:38:05.805251 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-16 00:38:05.843283 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-16 00:38:05.843311 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-16 00:38:05.843319 | orchestrator | + sleep 5 2025-09-16 00:38:10.848914 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-16 00:38:10.889950 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-16 00:38:10.890068 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-16 00:38:10.890084 | orchestrator | + sleep 5 2025-09-16 00:38:15.895083 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-16 00:38:15.929239 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-16 00:38:15.929319 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-16 00:38:15.929362 | orchestrator | + sleep 5 2025-09-16 00:38:20.933379 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-16 00:38:20.971802 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-16 00:38:20.971886 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-16 00:38:20.971897 | orchestrator | + sleep 5 2025-09-16 00:38:25.975542 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-16 00:38:26.016894 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-16 00:38:26.016938 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-16 00:38:26.016951 | orchestrator | + sleep 5 2025-09-16 00:38:31.020659 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-16 00:38:31.047861 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-16 00:38:31.047887 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-16 00:38:31.047892 | orchestrator | + sleep 5 2025-09-16 00:38:36.049854 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-16 00:38:36.069233 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-16 00:38:36.069248 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-16 00:38:36.069253 | orchestrator | + sleep 5 2025-09-16 00:38:41.072839 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-16 00:38:41.107500 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-16 00:38:41.107542 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-16 00:38:41.107556 | orchestrator | + sleep 5 2025-09-16 00:38:46.112429 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-16 00:38:46.145960 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-16 00:38:46.146062 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-16 00:38:46.146079 | orchestrator | + sleep 5 2025-09-16 00:38:51.150374 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-16 00:38:51.187970 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-16 00:38:51.188034 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-16 00:38:51.188049 | orchestrator | + sleep 5 2025-09-16 00:38:56.193614 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-16 00:38:56.230318 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-16 00:38:56.230379 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2025-09-16 00:38:56.230392 | orchestrator | + sleep 5 2025-09-16 00:39:01.234313 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-16 00:39:01.276252 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-16 00:39:01.276310 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-16 00:39:01.276325 | orchestrator | + local max_attempts=60 2025-09-16 00:39:01.276887 | orchestrator | + local name=kolla-ansible 2025-09-16 00:39:01.276910 | orchestrator | + local attempt_num=1 2025-09-16 00:39:01.277108 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-16 00:39:01.311698 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-16 00:39:01.311748 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-16 00:39:01.311761 | orchestrator | + local max_attempts=60 2025-09-16 00:39:01.311773 | orchestrator | + local name=osism-ansible 2025-09-16 00:39:01.311813 | orchestrator | + local attempt_num=1 2025-09-16 00:39:01.311825 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-16 00:39:01.345894 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-16 00:39:01.345925 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-16 00:39:01.345937 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-16 00:39:01.520034 | orchestrator | ARA in ceph-ansible already disabled. 2025-09-16 00:39:01.682610 | orchestrator | ARA in kolla-ansible already disabled. 2025-09-16 00:39:01.845479 | orchestrator | ARA in osism-ansible already disabled. 2025-09-16 00:39:01.996268 | orchestrator | ARA in osism-kubernetes already disabled. 2025-09-16 00:39:01.998158 | orchestrator | + osism apply gather-facts 2025-09-16 00:39:14.006476 | orchestrator | 2025-09-16 00:39:14 | INFO  | Task caf4e705-584c-4ff1-b149-6a3c4b1ad04b (gather-facts) was prepared for execution. 2025-09-16 00:39:14.006577 | orchestrator | 2025-09-16 00:39:14 | INFO  | It takes a moment until task caf4e705-584c-4ff1-b149-6a3c4b1ad04b (gather-facts) has been started and output is visible here. 
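Editor's note: the xtrace output above comes from a small polling helper. After manager.service is restarted, the script waits for the ceph-ansible, kolla-ansible and osism-ansible containers to report a "healthy" Docker health status before continuing; in this run ceph-ansible cycles through "unhealthy" and "starting" for roughly a minute before turning "healthy". A minimal sketch of such a helper, reconstructed from the trace (the real function lives in the testbed scripts and may differ in details such as error handling):

    wait_for_container_healthy() {
        # wait_for_container_healthy <max_attempts> <container name>
        local max_attempts=$1
        local name=$2
        local attempt_num=1
        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
            # give up after max_attempts polls, otherwise retry every 5 seconds
            if (( attempt_num++ == max_attempts )); then
                echo "container $name did not become healthy" >&2
                return 1
            fi
            sleep 5
        done
    }

    wait_for_container_healthy 60 ceph-ansible
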
2025-09-16 00:39:26.674081 | orchestrator | 2025-09-16 00:39:26.674192 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-16 00:39:26.674232 | orchestrator | 2025-09-16 00:39:26.674244 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-16 00:39:26.674254 | orchestrator | Tuesday 16 September 2025 00:39:17 +0000 (0:00:00.200) 0:00:00.200 ***** 2025-09-16 00:39:26.674265 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:39:26.674276 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:39:26.674286 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:39:26.674296 | orchestrator | ok: [testbed-manager] 2025-09-16 00:39:26.674306 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:39:26.674315 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:39:26.674325 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:39:26.674335 | orchestrator | 2025-09-16 00:39:26.674344 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-16 00:39:26.674354 | orchestrator | 2025-09-16 00:39:26.674364 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-16 00:39:26.674374 | orchestrator | Tuesday 16 September 2025 00:39:25 +0000 (0:00:08.254) 0:00:08.455 ***** 2025-09-16 00:39:26.674384 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:39:26.674395 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:39:26.674404 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:39:26.674414 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:39:26.674424 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:39:26.674433 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:39:26.674443 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:39:26.674453 | orchestrator | 2025-09-16 00:39:26.674462 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:39:26.674473 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-16 00:39:26.674484 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-16 00:39:26.674493 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-16 00:39:26.674503 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-16 00:39:26.674513 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-16 00:39:26.674523 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-16 00:39:26.674533 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-16 00:39:26.674543 | orchestrator | 2025-09-16 00:39:26.674553 | orchestrator | 2025-09-16 00:39:26.674563 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:39:26.674573 | orchestrator | Tuesday 16 September 2025 00:39:26 +0000 (0:00:00.479) 0:00:08.934 ***** 2025-09-16 00:39:26.674596 | orchestrator | =============================================================================== 2025-09-16 00:39:26.674608 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.25s 2025-09-16 
00:39:26.674619 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.48s 2025-09-16 00:39:26.964750 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-09-16 00:39:26.975284 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-09-16 00:39:26.993146 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-09-16 00:39:27.010864 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-09-16 00:39:27.028086 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-09-16 00:39:27.044672 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-09-16 00:39:27.064598 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-09-16 00:39:27.082075 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-09-16 00:39:27.100133 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-09-16 00:39:27.112803 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-09-16 00:39:27.132067 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-09-16 00:39:27.147612 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-09-16 00:39:27.164876 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-09-16 00:39:27.184230 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-09-16 00:39:27.200745 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-09-16 00:39:27.218206 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-09-16 00:39:27.232767 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-09-16 00:39:27.250179 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-09-16 00:39:27.262270 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-09-16 00:39:27.278117 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-09-16 00:39:27.293144 | orchestrator | + [[ false == \t\r\u\e ]] 2025-09-16 00:39:27.583178 | orchestrator | ok: Runtime: 0:23:18.420572 2025-09-16 00:39:27.680292 | 2025-09-16 00:39:27.680434 | TASK [Deploy services] 2025-09-16 00:39:28.211839 | orchestrator | skipping: Conditional result was False 2025-09-16 00:39:28.229748 | 2025-09-16 00:39:28.229933 | TASK [Deploy in a nutshell] 2025-09-16 00:39:28.948637 | orchestrator | + set -e 
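Editor's note: the block of "ln -sf" calls above wires the deploy, upgrade and bootstrap stage scripts under /opt/configuration/scripts into short commands on the PATH of the manager. Once the links exist, each stage can be invoked by name; purely as an illustration (command names taken from the links above, the invocations themselves are just examples):

    # each helper is a symlink to the corresponding stage script, e.g.:
    #   deploy-ceph-with-ansible -> /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh
    #   upgrade-openstack        -> /opt/configuration/scripts/upgrade/300-openstack.sh
    deploy-ceph-with-ansible    # run the Ceph deployment stage by hand
    upgrade-openstack           # run the OpenStack upgrade stage by hand
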
2025-09-16 00:39:28.948845 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-16 00:39:28.948870 | orchestrator | ++ export INTERACTIVE=false 2025-09-16 00:39:28.948893 | orchestrator | ++ INTERACTIVE=false 2025-09-16 00:39:28.948906 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-16 00:39:28.948919 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-16 00:39:28.948943 | orchestrator | + source /opt/manager-vars.sh 2025-09-16 00:39:28.948986 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-16 00:39:28.950060 | orchestrator | 2025-09-16 00:39:28.950085 | orchestrator | # PULL IMAGES 2025-09-16 00:39:28.950096 | orchestrator | 2025-09-16 00:39:28.950108 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-16 00:39:28.950125 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-16 00:39:28.950136 | orchestrator | ++ CEPH_VERSION=reef 2025-09-16 00:39:28.950154 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-16 00:39:28.950165 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-16 00:39:28.950187 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-16 00:39:28.950198 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-16 00:39:28.950212 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-16 00:39:28.950223 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-16 00:39:28.950234 | orchestrator | ++ export ARA=false 2025-09-16 00:39:28.950245 | orchestrator | ++ ARA=false 2025-09-16 00:39:28.950257 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-16 00:39:28.950268 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-16 00:39:28.950279 | orchestrator | ++ export TEMPEST=true 2025-09-16 00:39:28.950289 | orchestrator | ++ TEMPEST=true 2025-09-16 00:39:28.950300 | orchestrator | ++ export IS_ZUUL=true 2025-09-16 00:39:28.950311 | orchestrator | ++ IS_ZUUL=true 2025-09-16 00:39:28.950322 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.163 2025-09-16 00:39:28.950333 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.163 2025-09-16 00:39:28.950344 | orchestrator | ++ export EXTERNAL_API=false 2025-09-16 00:39:28.950355 | orchestrator | ++ EXTERNAL_API=false 2025-09-16 00:39:28.950365 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-16 00:39:28.950376 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-16 00:39:28.950387 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-16 00:39:28.950398 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-16 00:39:28.950409 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-16 00:39:28.950439 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-16 00:39:28.950451 | orchestrator | + echo 2025-09-16 00:39:28.950462 | orchestrator | + echo '# PULL IMAGES' 2025-09-16 00:39:28.950473 | orchestrator | + echo 2025-09-16 00:39:28.950488 | orchestrator | ++ semver latest 7.0.0 2025-09-16 00:39:29.001050 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-16 00:39:29.001111 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-16 00:39:29.001126 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-09-16 00:39:30.806702 | orchestrator | 2025-09-16 00:39:30 | INFO  | Trying to run play pull-images in environment custom 2025-09-16 00:39:40.890309 | orchestrator | 2025-09-16 00:39:40 | INFO  | Task c3a59ebc-ef11-4973-8a94-283f3d46fb1b (pull-images) was prepared for execution. 2025-09-16 00:39:40.890426 | orchestrator | 2025-09-16 00:39:40 | INFO  | Task c3a59ebc-ef11-4973-8a94-283f3d46fb1b is running in background. No more output. Check ARA for logs. 
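Editor's note: before the images are pulled, the script gates the play on the manager version. "semver latest 7.0.0" returns -1 here (the floating "latest" tag does not compare as a release), so the fallback check on the literal string "latest" is what lets the run proceed with the custom environment. A sketch of that gate as it appears in the trace (the semver helper itself is part of the testbed tooling and is not shown in the log):

    # run the custom pull-images play when the manager version is either a
    # release >= 7.0.0 or the floating "latest" tag (sketch of the trace above)
    if [[ "$(semver "$MANAGER_VERSION" 7.0.0)" -ge 0 ]] || [[ "$MANAGER_VERSION" == latest ]]; then
        osism apply --no-wait -r 2 -e custom pull-images
    fi
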
2025-09-16 00:39:42.881662 | orchestrator | 2025-09-16 00:39:42 | INFO  | Trying to run play wipe-partitions in environment custom 2025-09-16 00:39:52.982371 | orchestrator | 2025-09-16 00:39:52 | INFO  | Task cbd58b7d-661a-4e9e-98b1-eb8fb160d79c (wipe-partitions) was prepared for execution. 2025-09-16 00:39:52.982493 | orchestrator | 2025-09-16 00:39:52 | INFO  | It takes a moment until task cbd58b7d-661a-4e9e-98b1-eb8fb160d79c (wipe-partitions) has been started and output is visible here. 2025-09-16 00:40:05.201604 | orchestrator | 2025-09-16 00:40:05.201707 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-09-16 00:40:05.201724 | orchestrator | 2025-09-16 00:40:05.201737 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-09-16 00:40:05.201755 | orchestrator | Tuesday 16 September 2025 00:39:57 +0000 (0:00:00.148) 0:00:00.148 ***** 2025-09-16 00:40:05.201769 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:40:05.201825 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:40:05.201843 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:40:05.201855 | orchestrator | 2025-09-16 00:40:05.201866 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-09-16 00:40:05.201903 | orchestrator | Tuesday 16 September 2025 00:39:57 +0000 (0:00:00.553) 0:00:00.702 ***** 2025-09-16 00:40:05.201915 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:05.201927 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:40:05.201942 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:40:05.201953 | orchestrator | 2025-09-16 00:40:05.201964 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-09-16 00:40:05.201975 | orchestrator | Tuesday 16 September 2025 00:39:57 +0000 (0:00:00.243) 0:00:00.946 ***** 2025-09-16 00:40:05.201986 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:40:05.201998 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:40:05.202009 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:40:05.202073 | orchestrator | 2025-09-16 00:40:05.202086 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-09-16 00:40:05.202097 | orchestrator | Tuesday 16 September 2025 00:39:58 +0000 (0:00:00.793) 0:00:01.740 ***** 2025-09-16 00:40:05.202107 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:05.202118 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:40:05.202159 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:40:05.202173 | orchestrator | 2025-09-16 00:40:05.202187 | orchestrator | TASK [Check device availability] *********************************************** 2025-09-16 00:40:05.202199 | orchestrator | Tuesday 16 September 2025 00:39:58 +0000 (0:00:00.238) 0:00:01.978 ***** 2025-09-16 00:40:05.202212 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-16 00:40:05.202230 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-16 00:40:05.202243 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-16 00:40:05.202256 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-16 00:40:05.202268 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-16 00:40:05.202281 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-16 00:40:05.202293 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 
2025-09-16 00:40:05.202306 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-16 00:40:05.202318 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-16 00:40:05.202331 | orchestrator | 2025-09-16 00:40:05.202343 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-09-16 00:40:05.202356 | orchestrator | Tuesday 16 September 2025 00:40:00 +0000 (0:00:01.243) 0:00:03.221 ***** 2025-09-16 00:40:05.202370 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-09-16 00:40:05.202383 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-09-16 00:40:05.202395 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-09-16 00:40:05.202407 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-09-16 00:40:05.202420 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-09-16 00:40:05.202432 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-09-16 00:40:05.202444 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-09-16 00:40:05.202456 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-09-16 00:40:05.202469 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-09-16 00:40:05.202481 | orchestrator | 2025-09-16 00:40:05.202494 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-09-16 00:40:05.202505 | orchestrator | Tuesday 16 September 2025 00:40:01 +0000 (0:00:01.337) 0:00:04.559 ***** 2025-09-16 00:40:05.202516 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-16 00:40:05.202526 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-16 00:40:05.202537 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-16 00:40:05.202548 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-16 00:40:05.202559 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-16 00:40:05.202577 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-16 00:40:05.202588 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-16 00:40:05.202609 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-16 00:40:05.202620 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-16 00:40:05.202630 | orchestrator | 2025-09-16 00:40:05.202641 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-09-16 00:40:05.202652 | orchestrator | Tuesday 16 September 2025 00:40:03 +0000 (0:00:02.254) 0:00:06.813 ***** 2025-09-16 00:40:05.202663 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:40:05.202674 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:40:05.202685 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:40:05.202695 | orchestrator | 2025-09-16 00:40:05.202706 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-09-16 00:40:05.202717 | orchestrator | Tuesday 16 September 2025 00:40:04 +0000 (0:00:00.616) 0:00:07.429 ***** 2025-09-16 00:40:05.202728 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:40:05.202739 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:40:05.202750 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:40:05.202760 | orchestrator | 2025-09-16 00:40:05.202771 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:40:05.202802 | orchestrator | testbed-node-3 : ok=7  changed=5  
unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:40:05.202814 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:40:05.202844 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:40:05.202856 | orchestrator | 2025-09-16 00:40:05.202867 | orchestrator | 2025-09-16 00:40:05.202878 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:40:05.202889 | orchestrator | Tuesday 16 September 2025 00:40:04 +0000 (0:00:00.585) 0:00:08.014 ***** 2025-09-16 00:40:05.202900 | orchestrator | =============================================================================== 2025-09-16 00:40:05.202911 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.25s 2025-09-16 00:40:05.202921 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.34s 2025-09-16 00:40:05.202932 | orchestrator | Check device availability ----------------------------------------------- 1.24s 2025-09-16 00:40:05.202943 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.79s 2025-09-16 00:40:05.202954 | orchestrator | Reload udev rules ------------------------------------------------------- 0.62s 2025-09-16 00:40:05.202965 | orchestrator | Request device events from the kernel ----------------------------------- 0.59s 2025-09-16 00:40:05.202975 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.55s 2025-09-16 00:40:05.202986 | orchestrator | Remove all rook related logical devices --------------------------------- 0.24s 2025-09-16 00:40:05.202997 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s 2025-09-16 00:40:17.073609 | orchestrator | 2025-09-16 00:40:17 | INFO  | Task e9aad7ac-895d-4a77-849d-944b4fb74e63 (facts) was prepared for execution. 2025-09-16 00:40:17.073725 | orchestrator | 2025-09-16 00:40:17 | INFO  | It takes a moment until task e9aad7ac-895d-4a77-849d-944b4fb74e63 (facts) has been started and output is visible here. 
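Editor's note: the wipe-partitions play above clears the three extra disks (/dev/sdb, /dev/sdc, /dev/sdd) on the storage nodes so they can be reused as Ceph OSDs: it looks for leftover LVM volumes owned by the ceph UID (167), wipes filesystem signatures, zeroes the start of each disk and re-triggers udev. Roughly the same effect by hand, as a sketch (the playbook's exact module calls are not shown in the log):

    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        sudo wipefs --all "$dev"                        # remove partition table / filesystem signatures
        sudo dd if=/dev/zero of="$dev" bs=1M count=32   # overwrite the first 32M (stale LVM/Ceph metadata)
    done
    sudo udevadm control --reload-rules                 # reload udev rules
    sudo udevadm trigger                                # request device events from the kernel
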
2025-09-16 00:40:29.349764 | orchestrator | 2025-09-16 00:40:29.349923 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-16 00:40:29.349942 | orchestrator | 2025-09-16 00:40:29.349954 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-16 00:40:29.349966 | orchestrator | Tuesday 16 September 2025 00:40:20 +0000 (0:00:00.243) 0:00:00.243 ***** 2025-09-16 00:40:29.349978 | orchestrator | ok: [testbed-manager] 2025-09-16 00:40:29.349990 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:40:29.350001 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:40:29.350095 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:40:29.350108 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:40:29.350119 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:40:29.350130 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:40:29.350141 | orchestrator | 2025-09-16 00:40:29.350155 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-16 00:40:29.350166 | orchestrator | Tuesday 16 September 2025 00:40:21 +0000 (0:00:01.012) 0:00:01.255 ***** 2025-09-16 00:40:29.350177 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:40:29.350188 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:40:29.350199 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:40:29.350209 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:40:29.350220 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:29.350230 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:40:29.350241 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:40:29.350252 | orchestrator | 2025-09-16 00:40:29.350262 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-16 00:40:29.350273 | orchestrator | 2025-09-16 00:40:29.350284 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-16 00:40:29.350295 | orchestrator | Tuesday 16 September 2025 00:40:22 +0000 (0:00:01.081) 0:00:02.337 ***** 2025-09-16 00:40:29.350306 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:40:29.350318 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:40:29.350331 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:40:29.350343 | orchestrator | ok: [testbed-manager] 2025-09-16 00:40:29.350356 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:40:29.350367 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:40:29.350379 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:40:29.350391 | orchestrator | 2025-09-16 00:40:29.350404 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-16 00:40:29.350416 | orchestrator | 2025-09-16 00:40:29.350429 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-16 00:40:29.350458 | orchestrator | Tuesday 16 September 2025 00:40:28 +0000 (0:00:05.482) 0:00:07.819 ***** 2025-09-16 00:40:29.350471 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:40:29.350484 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:40:29.350497 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:40:29.350509 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:40:29.350521 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:29.350533 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:40:29.350545 | orchestrator | skipping: 
[testbed-node-5] 2025-09-16 00:40:29.350557 | orchestrator | 2025-09-16 00:40:29.350569 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:40:29.350582 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:40:29.350596 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:40:29.350606 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:40:29.350617 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:40:29.350628 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:40:29.350639 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:40:29.350650 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:40:29.350660 | orchestrator | 2025-09-16 00:40:29.350681 | orchestrator | 2025-09-16 00:40:29.350692 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:40:29.350703 | orchestrator | Tuesday 16 September 2025 00:40:28 +0000 (0:00:00.706) 0:00:08.526 ***** 2025-09-16 00:40:29.350713 | orchestrator | =============================================================================== 2025-09-16 00:40:29.350724 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.48s 2025-09-16 00:40:29.350734 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.08s 2025-09-16 00:40:29.350745 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.01s 2025-09-16 00:40:29.350756 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.71s 2025-09-16 00:40:31.523312 | orchestrator | 2025-09-16 00:40:31 | INFO  | Task 075e8922-de19-4f74-8a02-6cbcde379d1f (ceph-configure-lvm-volumes) was prepared for execution. 2025-09-16 00:40:31.523834 | orchestrator | 2025-09-16 00:40:31 | INFO  | It takes a moment until task 075e8922-de19-4f74-8a02-6cbcde379d1f (ceph-configure-lvm-volumes) has been started and output is visible here. 
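Editor's note: both the facts refresh above and the ceph-configure-lvm-volumes run that follows are ordinary "osism apply" invocations; the play names come straight from the task messages in this log and can be repeated manually on the manager if a node's configuration needs to be regenerated, for example:

    osism apply facts                         # refresh facts on all nodes
    osism apply ceph-configure-lvm-volumes    # regenerate the per-node OSD/LVM configuration
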
2025-09-16 00:40:42.998216 | orchestrator | 2025-09-16 00:40:42.998336 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-16 00:40:42.998352 | orchestrator | 2025-09-16 00:40:42.998364 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-16 00:40:42.998377 | orchestrator | Tuesday 16 September 2025 00:40:35 +0000 (0:00:00.425) 0:00:00.425 ***** 2025-09-16 00:40:42.998389 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-16 00:40:42.998400 | orchestrator | 2025-09-16 00:40:42.998411 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-16 00:40:42.998422 | orchestrator | Tuesday 16 September 2025 00:40:36 +0000 (0:00:00.277) 0:00:00.703 ***** 2025-09-16 00:40:42.998433 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:40:42.998445 | orchestrator | 2025-09-16 00:40:42.998456 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:42.998467 | orchestrator | Tuesday 16 September 2025 00:40:36 +0000 (0:00:00.226) 0:00:00.929 ***** 2025-09-16 00:40:42.998478 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-16 00:40:42.998490 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-16 00:40:42.998501 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-16 00:40:42.998512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-16 00:40:42.998522 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-16 00:40:42.998533 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-16 00:40:42.998544 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-16 00:40:42.998554 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-16 00:40:42.998565 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-16 00:40:42.998576 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-16 00:40:42.998587 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-16 00:40:42.998606 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-16 00:40:42.998618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-16 00:40:42.998629 | orchestrator | 2025-09-16 00:40:42.998640 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:42.998651 | orchestrator | Tuesday 16 September 2025 00:40:36 +0000 (0:00:00.364) 0:00:01.294 ***** 2025-09-16 00:40:42.998664 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:42.998695 | orchestrator | 2025-09-16 00:40:42.998707 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:42.998719 | orchestrator | Tuesday 16 September 2025 00:40:37 +0000 (0:00:00.443) 0:00:01.737 ***** 2025-09-16 00:40:42.998731 | orchestrator | skipping: [testbed-node-3] 2025-09-16 
00:40:42.998743 | orchestrator | 2025-09-16 00:40:42.998756 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:42.998768 | orchestrator | Tuesday 16 September 2025 00:40:37 +0000 (0:00:00.186) 0:00:01.924 ***** 2025-09-16 00:40:42.998805 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:42.998817 | orchestrator | 2025-09-16 00:40:42.998829 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:42.998840 | orchestrator | Tuesday 16 September 2025 00:40:37 +0000 (0:00:00.179) 0:00:02.104 ***** 2025-09-16 00:40:42.998853 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:42.998869 | orchestrator | 2025-09-16 00:40:42.998882 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:42.998894 | orchestrator | Tuesday 16 September 2025 00:40:37 +0000 (0:00:00.216) 0:00:02.321 ***** 2025-09-16 00:40:42.998907 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:42.998919 | orchestrator | 2025-09-16 00:40:42.998932 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:42.998944 | orchestrator | Tuesday 16 September 2025 00:40:38 +0000 (0:00:00.186) 0:00:02.507 ***** 2025-09-16 00:40:42.998957 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:42.998969 | orchestrator | 2025-09-16 00:40:42.998981 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:42.998994 | orchestrator | Tuesday 16 September 2025 00:40:38 +0000 (0:00:00.180) 0:00:02.688 ***** 2025-09-16 00:40:42.999006 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:42.999018 | orchestrator | 2025-09-16 00:40:42.999028 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:42.999039 | orchestrator | Tuesday 16 September 2025 00:40:38 +0000 (0:00:00.197) 0:00:02.886 ***** 2025-09-16 00:40:42.999050 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:42.999061 | orchestrator | 2025-09-16 00:40:42.999072 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:42.999082 | orchestrator | Tuesday 16 September 2025 00:40:38 +0000 (0:00:00.207) 0:00:03.093 ***** 2025-09-16 00:40:42.999093 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf) 2025-09-16 00:40:42.999105 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf) 2025-09-16 00:40:42.999116 | orchestrator | 2025-09-16 00:40:42.999127 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:42.999138 | orchestrator | Tuesday 16 September 2025 00:40:39 +0000 (0:00:00.375) 0:00:03.468 ***** 2025-09-16 00:40:42.999166 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_216f9756-46fe-48b3-8a57-6cc5b7e0c275) 2025-09-16 00:40:42.999178 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_216f9756-46fe-48b3-8a57-6cc5b7e0c275) 2025-09-16 00:40:42.999189 | orchestrator | 2025-09-16 00:40:42.999200 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:42.999210 | orchestrator | Tuesday 16 September 2025 00:40:39 +0000 (0:00:00.378) 0:00:03.847 ***** 2025-09-16 
00:40:42.999221 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ebe7fd99-ddf0-4119-8dea-cb8b427f2aed) 2025-09-16 00:40:42.999232 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ebe7fd99-ddf0-4119-8dea-cb8b427f2aed) 2025-09-16 00:40:42.999243 | orchestrator | 2025-09-16 00:40:42.999253 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:42.999264 | orchestrator | Tuesday 16 September 2025 00:40:39 +0000 (0:00:00.552) 0:00:04.399 ***** 2025-09-16 00:40:42.999275 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6b7c66eb-e150-40bb-863f-cd4924cbb0ab) 2025-09-16 00:40:42.999294 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6b7c66eb-e150-40bb-863f-cd4924cbb0ab) 2025-09-16 00:40:42.999305 | orchestrator | 2025-09-16 00:40:42.999315 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:42.999326 | orchestrator | Tuesday 16 September 2025 00:40:40 +0000 (0:00:00.600) 0:00:05.000 ***** 2025-09-16 00:40:42.999337 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-16 00:40:42.999348 | orchestrator | 2025-09-16 00:40:42.999358 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:42.999375 | orchestrator | Tuesday 16 September 2025 00:40:41 +0000 (0:00:00.662) 0:00:05.663 ***** 2025-09-16 00:40:42.999386 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-16 00:40:42.999397 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-16 00:40:42.999408 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-16 00:40:42.999418 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-16 00:40:42.999429 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-16 00:40:42.999440 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-16 00:40:42.999450 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-16 00:40:42.999461 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-16 00:40:42.999472 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-16 00:40:42.999482 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-16 00:40:42.999493 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-16 00:40:42.999504 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-16 00:40:42.999515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-16 00:40:42.999525 | orchestrator | 2025-09-16 00:40:42.999536 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:42.999547 | orchestrator | Tuesday 16 September 2025 00:40:41 +0000 (0:00:00.340) 0:00:06.003 ***** 2025-09-16 00:40:42.999558 | orchestrator | skipping: [testbed-node-3] 
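Editor's note: the long scsi-0QEMU_.../scsi-SQEMU_... items resolved in the tasks above are the stable device names udev creates under /dev/disk/by-id; the play records them as alternative names for each disk, presumably so the Ceph configuration can reference disks by ID rather than by the unstable sdX letter. They can be listed directly on a node, for example (the QEMU-generated IDs differ on every run):

    ls -l /dev/disk/by-id/ | grep -E 'scsi-(0|S)QEMU'
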
2025-09-16 00:40:42.999568 | orchestrator | 2025-09-16 00:40:42.999579 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:42.999590 | orchestrator | Tuesday 16 September 2025 00:40:41 +0000 (0:00:00.174) 0:00:06.178 ***** 2025-09-16 00:40:42.999601 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:42.999611 | orchestrator | 2025-09-16 00:40:42.999622 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:42.999633 | orchestrator | Tuesday 16 September 2025 00:40:41 +0000 (0:00:00.202) 0:00:06.381 ***** 2025-09-16 00:40:42.999643 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:42.999654 | orchestrator | 2025-09-16 00:40:42.999665 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:42.999675 | orchestrator | Tuesday 16 September 2025 00:40:42 +0000 (0:00:00.179) 0:00:06.560 ***** 2025-09-16 00:40:42.999686 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:42.999697 | orchestrator | 2025-09-16 00:40:42.999708 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:42.999718 | orchestrator | Tuesday 16 September 2025 00:40:42 +0000 (0:00:00.182) 0:00:06.743 ***** 2025-09-16 00:40:42.999729 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:42.999740 | orchestrator | 2025-09-16 00:40:42.999757 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:42.999768 | orchestrator | Tuesday 16 September 2025 00:40:42 +0000 (0:00:00.178) 0:00:06.922 ***** 2025-09-16 00:40:42.999808 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:42.999819 | orchestrator | 2025-09-16 00:40:42.999829 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:42.999840 | orchestrator | Tuesday 16 September 2025 00:40:42 +0000 (0:00:00.180) 0:00:07.102 ***** 2025-09-16 00:40:42.999851 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:42.999862 | orchestrator | 2025-09-16 00:40:42.999872 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:42.999883 | orchestrator | Tuesday 16 September 2025 00:40:42 +0000 (0:00:00.171) 0:00:07.273 ***** 2025-09-16 00:40:42.999900 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:49.761669 | orchestrator | 2025-09-16 00:40:49.761827 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:49.761846 | orchestrator | Tuesday 16 September 2025 00:40:42 +0000 (0:00:00.170) 0:00:07.444 ***** 2025-09-16 00:40:49.761858 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-16 00:40:49.761872 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-16 00:40:49.761883 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-16 00:40:49.761894 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-16 00:40:49.761905 | orchestrator | 2025-09-16 00:40:49.761916 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:49.761927 | orchestrator | Tuesday 16 September 2025 00:40:43 +0000 (0:00:00.818) 0:00:08.263 ***** 2025-09-16 00:40:49.761938 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:49.761949 | orchestrator | 2025-09-16 00:40:49.761960 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:49.761971 | orchestrator | Tuesday 16 September 2025 00:40:43 +0000 (0:00:00.169) 0:00:08.432 ***** 2025-09-16 00:40:49.761982 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:49.761993 | orchestrator | 2025-09-16 00:40:49.762004 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:49.762015 | orchestrator | Tuesday 16 September 2025 00:40:44 +0000 (0:00:00.169) 0:00:08.602 ***** 2025-09-16 00:40:49.762079 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:49.762091 | orchestrator | 2025-09-16 00:40:49.762102 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:49.762112 | orchestrator | Tuesday 16 September 2025 00:40:44 +0000 (0:00:00.179) 0:00:08.781 ***** 2025-09-16 00:40:49.762123 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:49.762134 | orchestrator | 2025-09-16 00:40:49.762213 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-16 00:40:49.762231 | orchestrator | Tuesday 16 September 2025 00:40:44 +0000 (0:00:00.174) 0:00:08.956 ***** 2025-09-16 00:40:49.762243 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-09-16 00:40:49.762256 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-09-16 00:40:49.762268 | orchestrator | 2025-09-16 00:40:49.762281 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-16 00:40:49.762294 | orchestrator | Tuesday 16 September 2025 00:40:44 +0000 (0:00:00.158) 0:00:09.114 ***** 2025-09-16 00:40:49.762325 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:49.762337 | orchestrator | 2025-09-16 00:40:49.762350 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-16 00:40:49.762363 | orchestrator | Tuesday 16 September 2025 00:40:44 +0000 (0:00:00.114) 0:00:09.229 ***** 2025-09-16 00:40:49.762375 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:49.762388 | orchestrator | 2025-09-16 00:40:49.762400 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-16 00:40:49.762412 | orchestrator | Tuesday 16 September 2025 00:40:44 +0000 (0:00:00.099) 0:00:09.328 ***** 2025-09-16 00:40:49.762425 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:49.762457 | orchestrator | 2025-09-16 00:40:49.762470 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-16 00:40:49.762482 | orchestrator | Tuesday 16 September 2025 00:40:44 +0000 (0:00:00.117) 0:00:09.446 ***** 2025-09-16 00:40:49.762495 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:40:49.762510 | orchestrator | 2025-09-16 00:40:49.762529 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-16 00:40:49.762549 | orchestrator | Tuesday 16 September 2025 00:40:45 +0000 (0:00:00.121) 0:00:09.567 ***** 2025-09-16 00:40:49.762569 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8832b43a-4370-5f7f-b8ca-e1ef860202d6'}}) 2025-09-16 00:40:49.762589 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b409e677-b998-57d2-be40-43b65c9fb72d'}}) 2025-09-16 00:40:49.762600 | orchestrator | 
2025-09-16 00:40:49.762611 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-16 00:40:49.762622 | orchestrator | Tuesday 16 September 2025 00:40:45 +0000 (0:00:00.158) 0:00:09.726 ***** 2025-09-16 00:40:49.762634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8832b43a-4370-5f7f-b8ca-e1ef860202d6'}})  2025-09-16 00:40:49.762654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b409e677-b998-57d2-be40-43b65c9fb72d'}})  2025-09-16 00:40:49.762666 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:49.762677 | orchestrator | 2025-09-16 00:40:49.762688 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-16 00:40:49.762699 | orchestrator | Tuesday 16 September 2025 00:40:45 +0000 (0:00:00.128) 0:00:09.854 ***** 2025-09-16 00:40:49.762709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8832b43a-4370-5f7f-b8ca-e1ef860202d6'}})  2025-09-16 00:40:49.762720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b409e677-b998-57d2-be40-43b65c9fb72d'}})  2025-09-16 00:40:49.762731 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:49.762742 | orchestrator | 2025-09-16 00:40:49.762753 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-16 00:40:49.762764 | orchestrator | Tuesday 16 September 2025 00:40:45 +0000 (0:00:00.253) 0:00:10.108 ***** 2025-09-16 00:40:49.762818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8832b43a-4370-5f7f-b8ca-e1ef860202d6'}})  2025-09-16 00:40:49.762839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b409e677-b998-57d2-be40-43b65c9fb72d'}})  2025-09-16 00:40:49.762858 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:49.762876 | orchestrator | 2025-09-16 00:40:49.762912 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-16 00:40:49.762924 | orchestrator | Tuesday 16 September 2025 00:40:45 +0000 (0:00:00.132) 0:00:10.240 ***** 2025-09-16 00:40:49.762935 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:40:49.762946 | orchestrator | 2025-09-16 00:40:49.762965 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-16 00:40:49.762977 | orchestrator | Tuesday 16 September 2025 00:40:45 +0000 (0:00:00.130) 0:00:10.370 ***** 2025-09-16 00:40:49.762987 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:40:49.762998 | orchestrator | 2025-09-16 00:40:49.763009 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-16 00:40:49.763019 | orchestrator | Tuesday 16 September 2025 00:40:46 +0000 (0:00:00.127) 0:00:10.498 ***** 2025-09-16 00:40:49.763030 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:49.763041 | orchestrator | 2025-09-16 00:40:49.763051 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-16 00:40:49.763062 | orchestrator | Tuesday 16 September 2025 00:40:46 +0000 (0:00:00.127) 0:00:10.626 ***** 2025-09-16 00:40:49.763072 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:49.763083 | orchestrator | 2025-09-16 00:40:49.763103 | orchestrator | TASK [Set DB+WAL devices config data] 
****************************************** 2025-09-16 00:40:49.763115 | orchestrator | Tuesday 16 September 2025 00:40:46 +0000 (0:00:00.134) 0:00:10.760 ***** 2025-09-16 00:40:49.763125 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:49.763136 | orchestrator | 2025-09-16 00:40:49.763146 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-16 00:40:49.763157 | orchestrator | Tuesday 16 September 2025 00:40:46 +0000 (0:00:00.104) 0:00:10.864 ***** 2025-09-16 00:40:49.763168 | orchestrator | ok: [testbed-node-3] => { 2025-09-16 00:40:49.763178 | orchestrator |  "ceph_osd_devices": { 2025-09-16 00:40:49.763190 | orchestrator |  "sdb": { 2025-09-16 00:40:49.763201 | orchestrator |  "osd_lvm_uuid": "8832b43a-4370-5f7f-b8ca-e1ef860202d6" 2025-09-16 00:40:49.763213 | orchestrator |  }, 2025-09-16 00:40:49.763224 | orchestrator |  "sdc": { 2025-09-16 00:40:49.763235 | orchestrator |  "osd_lvm_uuid": "b409e677-b998-57d2-be40-43b65c9fb72d" 2025-09-16 00:40:49.763246 | orchestrator |  } 2025-09-16 00:40:49.763256 | orchestrator |  } 2025-09-16 00:40:49.763267 | orchestrator | } 2025-09-16 00:40:49.763278 | orchestrator | 2025-09-16 00:40:49.763289 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-16 00:40:49.763300 | orchestrator | Tuesday 16 September 2025 00:40:46 +0000 (0:00:00.143) 0:00:11.008 ***** 2025-09-16 00:40:49.763310 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:49.763321 | orchestrator | 2025-09-16 00:40:49.763331 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-16 00:40:49.763342 | orchestrator | Tuesday 16 September 2025 00:40:46 +0000 (0:00:00.140) 0:00:11.148 ***** 2025-09-16 00:40:49.763353 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:49.763363 | orchestrator | 2025-09-16 00:40:49.763374 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-16 00:40:49.763384 | orchestrator | Tuesday 16 September 2025 00:40:46 +0000 (0:00:00.127) 0:00:11.276 ***** 2025-09-16 00:40:49.763395 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:40:49.763406 | orchestrator | 2025-09-16 00:40:49.763416 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-16 00:40:49.763427 | orchestrator | Tuesday 16 September 2025 00:40:46 +0000 (0:00:00.128) 0:00:11.404 ***** 2025-09-16 00:40:49.763437 | orchestrator | changed: [testbed-node-3] => { 2025-09-16 00:40:49.763448 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-16 00:40:49.763459 | orchestrator |  "ceph_osd_devices": { 2025-09-16 00:40:49.763470 | orchestrator |  "sdb": { 2025-09-16 00:40:49.763481 | orchestrator |  "osd_lvm_uuid": "8832b43a-4370-5f7f-b8ca-e1ef860202d6" 2025-09-16 00:40:49.763492 | orchestrator |  }, 2025-09-16 00:40:49.763503 | orchestrator |  "sdc": { 2025-09-16 00:40:49.763513 | orchestrator |  "osd_lvm_uuid": "b409e677-b998-57d2-be40-43b65c9fb72d" 2025-09-16 00:40:49.763524 | orchestrator |  } 2025-09-16 00:40:49.763535 | orchestrator |  }, 2025-09-16 00:40:49.763546 | orchestrator |  "lvm_volumes": [ 2025-09-16 00:40:49.763557 | orchestrator |  { 2025-09-16 00:40:49.763568 | orchestrator |  "data": "osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6", 2025-09-16 00:40:49.763579 | orchestrator |  "data_vg": "ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6" 2025-09-16 00:40:49.763589 | orchestrator |  }, 2025-09-16 
00:40:49.763600 | orchestrator |  { 2025-09-16 00:40:49.763611 | orchestrator |  "data": "osd-block-b409e677-b998-57d2-be40-43b65c9fb72d", 2025-09-16 00:40:49.763622 | orchestrator |  "data_vg": "ceph-b409e677-b998-57d2-be40-43b65c9fb72d" 2025-09-16 00:40:49.763633 | orchestrator |  } 2025-09-16 00:40:49.763643 | orchestrator |  ] 2025-09-16 00:40:49.763654 | orchestrator |  } 2025-09-16 00:40:49.763665 | orchestrator | } 2025-09-16 00:40:49.763676 | orchestrator | 2025-09-16 00:40:49.763692 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-16 00:40:49.763709 | orchestrator | Tuesday 16 September 2025 00:40:47 +0000 (0:00:00.208) 0:00:11.612 ***** 2025-09-16 00:40:49.763720 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-16 00:40:49.763731 | orchestrator | 2025-09-16 00:40:49.763742 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-16 00:40:49.763753 | orchestrator | 2025-09-16 00:40:49.763763 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-16 00:40:49.763800 | orchestrator | Tuesday 16 September 2025 00:40:49 +0000 (0:00:02.147) 0:00:13.760 ***** 2025-09-16 00:40:49.763812 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-16 00:40:49.763823 | orchestrator | 2025-09-16 00:40:49.763834 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-16 00:40:49.763844 | orchestrator | Tuesday 16 September 2025 00:40:49 +0000 (0:00:00.236) 0:00:13.996 ***** 2025-09-16 00:40:49.763855 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:40:49.763866 | orchestrator | 2025-09-16 00:40:49.763878 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:49.763907 | orchestrator | Tuesday 16 September 2025 00:40:49 +0000 (0:00:00.211) 0:00:14.207 ***** 2025-09-16 00:40:56.719152 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-16 00:40:56.719260 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-16 00:40:56.719276 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-16 00:40:56.719288 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-16 00:40:56.719300 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-16 00:40:56.719311 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-16 00:40:56.719322 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-16 00:40:56.719333 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-16 00:40:56.719344 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-16 00:40:56.719354 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-16 00:40:56.719365 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-16 00:40:56.719376 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-16 00:40:56.719386 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-16 00:40:56.719402 | orchestrator | 2025-09-16 00:40:56.719414 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:56.719426 | orchestrator | Tuesday 16 September 2025 00:40:50 +0000 (0:00:00.388) 0:00:14.596 ***** 2025-09-16 00:40:56.719437 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:40:56.719449 | orchestrator | 2025-09-16 00:40:56.719460 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:56.719471 | orchestrator | Tuesday 16 September 2025 00:40:50 +0000 (0:00:00.189) 0:00:14.785 ***** 2025-09-16 00:40:56.719481 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:40:56.719492 | orchestrator | 2025-09-16 00:40:56.719503 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:56.719514 | orchestrator | Tuesday 16 September 2025 00:40:50 +0000 (0:00:00.196) 0:00:14.982 ***** 2025-09-16 00:40:56.719525 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:40:56.719535 | orchestrator | 2025-09-16 00:40:56.719546 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:56.719557 | orchestrator | Tuesday 16 September 2025 00:40:50 +0000 (0:00:00.206) 0:00:15.189 ***** 2025-09-16 00:40:56.719568 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:40:56.719601 | orchestrator | 2025-09-16 00:40:56.719613 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:56.719624 | orchestrator | Tuesday 16 September 2025 00:40:50 +0000 (0:00:00.187) 0:00:15.376 ***** 2025-09-16 00:40:56.719635 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:40:56.719645 | orchestrator | 2025-09-16 00:40:56.719656 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:56.719667 | orchestrator | Tuesday 16 September 2025 00:40:51 +0000 (0:00:00.563) 0:00:15.939 ***** 2025-09-16 00:40:56.719677 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:40:56.719689 | orchestrator | 2025-09-16 00:40:56.719702 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:56.719714 | orchestrator | Tuesday 16 September 2025 00:40:51 +0000 (0:00:00.179) 0:00:16.119 ***** 2025-09-16 00:40:56.719742 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:40:56.719756 | orchestrator | 2025-09-16 00:40:56.719768 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:56.719808 | orchestrator | Tuesday 16 September 2025 00:40:51 +0000 (0:00:00.199) 0:00:16.319 ***** 2025-09-16 00:40:56.719821 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:40:56.719833 | orchestrator | 2025-09-16 00:40:56.719846 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:56.719858 | orchestrator | Tuesday 16 September 2025 00:40:52 +0000 (0:00:00.189) 0:00:16.508 ***** 2025-09-16 00:40:56.719871 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370) 2025-09-16 00:40:56.719884 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370) 2025-09-16 00:40:56.719896 | orchestrator | 2025-09-16 
00:40:56.719909 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:56.719921 | orchestrator | Tuesday 16 September 2025 00:40:52 +0000 (0:00:00.407) 0:00:16.916 ***** 2025-09-16 00:40:56.719934 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5c63af0b-1be6-4a9c-8f35-a4445080f1db) 2025-09-16 00:40:56.719947 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5c63af0b-1be6-4a9c-8f35-a4445080f1db) 2025-09-16 00:40:56.719959 | orchestrator | 2025-09-16 00:40:56.719971 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:56.719984 | orchestrator | Tuesday 16 September 2025 00:40:52 +0000 (0:00:00.397) 0:00:17.313 ***** 2025-09-16 00:40:56.719996 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_da9e83cb-2e5e-4388-ad73-1879a24665a3) 2025-09-16 00:40:56.720009 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_da9e83cb-2e5e-4388-ad73-1879a24665a3) 2025-09-16 00:40:56.720022 | orchestrator | 2025-09-16 00:40:56.720034 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:56.720046 | orchestrator | Tuesday 16 September 2025 00:40:53 +0000 (0:00:00.420) 0:00:17.733 ***** 2025-09-16 00:40:56.720075 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_46481bd5-1fc4-4619-9f81-82a2d5c944be) 2025-09-16 00:40:56.720087 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_46481bd5-1fc4-4619-9f81-82a2d5c944be) 2025-09-16 00:40:56.720098 | orchestrator | 2025-09-16 00:40:56.720108 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:40:56.720119 | orchestrator | Tuesday 16 September 2025 00:40:53 +0000 (0:00:00.378) 0:00:18.112 ***** 2025-09-16 00:40:56.720130 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-16 00:40:56.720140 | orchestrator | 2025-09-16 00:40:56.720151 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:56.720162 | orchestrator | Tuesday 16 September 2025 00:40:53 +0000 (0:00:00.288) 0:00:18.400 ***** 2025-09-16 00:40:56.720172 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-16 00:40:56.720193 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-16 00:40:56.720204 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-16 00:40:56.720215 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-16 00:40:56.720225 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-16 00:40:56.720236 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-16 00:40:56.720246 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-16 00:40:56.720257 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-16 00:40:56.720267 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-16 00:40:56.720278 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-16 00:40:56.720288 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-16 00:40:56.720299 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-16 00:40:56.720309 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-16 00:40:56.720320 | orchestrator | 2025-09-16 00:40:56.720331 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:56.720341 | orchestrator | Tuesday 16 September 2025 00:40:54 +0000 (0:00:00.321) 0:00:18.721 ***** 2025-09-16 00:40:56.720352 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:40:56.720363 | orchestrator | 2025-09-16 00:40:56.720373 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:56.720384 | orchestrator | Tuesday 16 September 2025 00:40:54 +0000 (0:00:00.178) 0:00:18.900 ***** 2025-09-16 00:40:56.720395 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:40:56.720405 | orchestrator | 2025-09-16 00:40:56.720422 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:56.720434 | orchestrator | Tuesday 16 September 2025 00:40:54 +0000 (0:00:00.477) 0:00:19.378 ***** 2025-09-16 00:40:56.720444 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:40:56.720455 | orchestrator | 2025-09-16 00:40:56.720466 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:56.720476 | orchestrator | Tuesday 16 September 2025 00:40:55 +0000 (0:00:00.174) 0:00:19.552 ***** 2025-09-16 00:40:56.720487 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:40:56.720498 | orchestrator | 2025-09-16 00:40:56.720509 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:56.720519 | orchestrator | Tuesday 16 September 2025 00:40:55 +0000 (0:00:00.168) 0:00:19.721 ***** 2025-09-16 00:40:56.720530 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:40:56.720541 | orchestrator | 2025-09-16 00:40:56.720551 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:56.720562 | orchestrator | Tuesday 16 September 2025 00:40:55 +0000 (0:00:00.171) 0:00:19.893 ***** 2025-09-16 00:40:56.720573 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:40:56.720583 | orchestrator | 2025-09-16 00:40:56.720594 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:56.720605 | orchestrator | Tuesday 16 September 2025 00:40:55 +0000 (0:00:00.183) 0:00:20.077 ***** 2025-09-16 00:40:56.720615 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:40:56.720626 | orchestrator | 2025-09-16 00:40:56.720637 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:56.720647 | orchestrator | Tuesday 16 September 2025 00:40:55 +0000 (0:00:00.173) 0:00:20.250 ***** 2025-09-16 00:40:56.720658 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:40:56.720669 | orchestrator | 2025-09-16 00:40:56.720679 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:56.720698 | orchestrator | Tuesday 16 September 
2025 00:40:55 +0000 (0:00:00.171) 0:00:20.422 ***** 2025-09-16 00:40:56.720709 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-16 00:40:56.720720 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-16 00:40:56.720732 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-16 00:40:56.720743 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-16 00:40:56.720753 | orchestrator | 2025-09-16 00:40:56.720764 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:40:56.720803 | orchestrator | Tuesday 16 September 2025 00:40:56 +0000 (0:00:00.561) 0:00:20.983 ***** 2025-09-16 00:40:56.720814 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:40:56.720825 | orchestrator | 2025-09-16 00:40:56.720842 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:41:01.978538 | orchestrator | Tuesday 16 September 2025 00:40:56 +0000 (0:00:00.183) 0:00:21.167 ***** 2025-09-16 00:41:01.978639 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:41:01.978656 | orchestrator | 2025-09-16 00:41:01.978669 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:41:01.978680 | orchestrator | Tuesday 16 September 2025 00:40:56 +0000 (0:00:00.173) 0:00:21.340 ***** 2025-09-16 00:41:01.978691 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:41:01.978702 | orchestrator | 2025-09-16 00:41:01.978713 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:41:01.978725 | orchestrator | Tuesday 16 September 2025 00:40:57 +0000 (0:00:00.176) 0:00:21.517 ***** 2025-09-16 00:41:01.978735 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:41:01.978746 | orchestrator | 2025-09-16 00:41:01.978757 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-16 00:41:01.978768 | orchestrator | Tuesday 16 September 2025 00:40:57 +0000 (0:00:00.161) 0:00:21.679 ***** 2025-09-16 00:41:01.978811 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-09-16 00:41:01.978822 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-09-16 00:41:01.978833 | orchestrator | 2025-09-16 00:41:01.978844 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-16 00:41:01.978855 | orchestrator | Tuesday 16 September 2025 00:40:57 +0000 (0:00:00.247) 0:00:21.927 ***** 2025-09-16 00:41:01.978866 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:41:01.978877 | orchestrator | 2025-09-16 00:41:01.978888 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-16 00:41:01.978899 | orchestrator | Tuesday 16 September 2025 00:40:57 +0000 (0:00:00.115) 0:00:22.042 ***** 2025-09-16 00:41:01.978911 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:41:01.978922 | orchestrator | 2025-09-16 00:41:01.978933 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-16 00:41:01.978944 | orchestrator | Tuesday 16 September 2025 00:40:57 +0000 (0:00:00.142) 0:00:22.185 ***** 2025-09-16 00:41:01.978954 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:41:01.978965 | orchestrator | 2025-09-16 00:41:01.978976 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-16 
00:41:01.978987 | orchestrator | Tuesday 16 September 2025 00:40:57 +0000 (0:00:00.128) 0:00:22.314 ***** 2025-09-16 00:41:01.978998 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:41:01.979010 | orchestrator | 2025-09-16 00:41:01.979022 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-16 00:41:01.979033 | orchestrator | Tuesday 16 September 2025 00:40:57 +0000 (0:00:00.134) 0:00:22.448 ***** 2025-09-16 00:41:01.979044 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a154e298-15cb-5d50-9a1c-17bc1371db7e'}}) 2025-09-16 00:41:01.979056 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '56010334-63d7-5603-a2fe-432c47d6dcb8'}}) 2025-09-16 00:41:01.979067 | orchestrator | 2025-09-16 00:41:01.979081 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-16 00:41:01.979128 | orchestrator | Tuesday 16 September 2025 00:40:58 +0000 (0:00:00.136) 0:00:22.585 ***** 2025-09-16 00:41:01.979142 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a154e298-15cb-5d50-9a1c-17bc1371db7e'}})  2025-09-16 00:41:01.979157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '56010334-63d7-5603-a2fe-432c47d6dcb8'}})  2025-09-16 00:41:01.979169 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:41:01.979182 | orchestrator | 2025-09-16 00:41:01.979211 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-16 00:41:01.979224 | orchestrator | Tuesday 16 September 2025 00:40:58 +0000 (0:00:00.135) 0:00:22.720 ***** 2025-09-16 00:41:01.979237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a154e298-15cb-5d50-9a1c-17bc1371db7e'}})  2025-09-16 00:41:01.979250 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '56010334-63d7-5603-a2fe-432c47d6dcb8'}})  2025-09-16 00:41:01.979263 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:41:01.979275 | orchestrator | 2025-09-16 00:41:01.979286 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-16 00:41:01.979297 | orchestrator | Tuesday 16 September 2025 00:40:58 +0000 (0:00:00.213) 0:00:22.934 ***** 2025-09-16 00:41:01.979307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a154e298-15cb-5d50-9a1c-17bc1371db7e'}})  2025-09-16 00:41:01.979319 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '56010334-63d7-5603-a2fe-432c47d6dcb8'}})  2025-09-16 00:41:01.979331 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:41:01.979342 | orchestrator | 2025-09-16 00:41:01.979353 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-16 00:41:01.979364 | orchestrator | Tuesday 16 September 2025 00:40:58 +0000 (0:00:00.134) 0:00:23.069 ***** 2025-09-16 00:41:01.979375 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:41:01.979386 | orchestrator | 2025-09-16 00:41:01.979397 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-16 00:41:01.979408 | orchestrator | Tuesday 16 September 2025 00:40:58 +0000 (0:00:00.124) 0:00:23.193 ***** 2025-09-16 00:41:01.979418 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:41:01.979429 
| orchestrator | 2025-09-16 00:41:01.979440 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-16 00:41:01.979451 | orchestrator | Tuesday 16 September 2025 00:40:58 +0000 (0:00:00.115) 0:00:23.309 ***** 2025-09-16 00:41:01.979462 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:41:01.979473 | orchestrator | 2025-09-16 00:41:01.979503 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-16 00:41:01.979515 | orchestrator | Tuesday 16 September 2025 00:40:58 +0000 (0:00:00.117) 0:00:23.426 ***** 2025-09-16 00:41:01.979525 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:41:01.979536 | orchestrator | 2025-09-16 00:41:01.979547 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-16 00:41:01.979558 | orchestrator | Tuesday 16 September 2025 00:40:59 +0000 (0:00:00.258) 0:00:23.685 ***** 2025-09-16 00:41:01.979569 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:41:01.979580 | orchestrator | 2025-09-16 00:41:01.979591 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-16 00:41:01.979602 | orchestrator | Tuesday 16 September 2025 00:40:59 +0000 (0:00:00.115) 0:00:23.800 ***** 2025-09-16 00:41:01.979613 | orchestrator | ok: [testbed-node-4] => { 2025-09-16 00:41:01.979624 | orchestrator |  "ceph_osd_devices": { 2025-09-16 00:41:01.979635 | orchestrator |  "sdb": { 2025-09-16 00:41:01.979647 | orchestrator |  "osd_lvm_uuid": "a154e298-15cb-5d50-9a1c-17bc1371db7e" 2025-09-16 00:41:01.979658 | orchestrator |  }, 2025-09-16 00:41:01.979670 | orchestrator |  "sdc": { 2025-09-16 00:41:01.979691 | orchestrator |  "osd_lvm_uuid": "56010334-63d7-5603-a2fe-432c47d6dcb8" 2025-09-16 00:41:01.979701 | orchestrator |  } 2025-09-16 00:41:01.979712 | orchestrator |  } 2025-09-16 00:41:01.979724 | orchestrator | } 2025-09-16 00:41:01.979735 | orchestrator | 2025-09-16 00:41:01.979746 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-16 00:41:01.979757 | orchestrator | Tuesday 16 September 2025 00:40:59 +0000 (0:00:00.110) 0:00:23.911 ***** 2025-09-16 00:41:01.979767 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:41:01.979795 | orchestrator | 2025-09-16 00:41:01.979806 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-16 00:41:01.979817 | orchestrator | Tuesday 16 September 2025 00:40:59 +0000 (0:00:00.103) 0:00:24.014 ***** 2025-09-16 00:41:01.979828 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:41:01.979838 | orchestrator | 2025-09-16 00:41:01.979849 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-16 00:41:01.979860 | orchestrator | Tuesday 16 September 2025 00:40:59 +0000 (0:00:00.102) 0:00:24.117 ***** 2025-09-16 00:41:01.979871 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:41:01.979881 | orchestrator | 2025-09-16 00:41:01.979892 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-16 00:41:01.979903 | orchestrator | Tuesday 16 September 2025 00:40:59 +0000 (0:00:00.104) 0:00:24.221 ***** 2025-09-16 00:41:01.979914 | orchestrator | changed: [testbed-node-4] => { 2025-09-16 00:41:01.979925 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-16 00:41:01.979936 | orchestrator |  "ceph_osd_devices": { 2025-09-16 
00:41:01.979947 | orchestrator |  "sdb": { 2025-09-16 00:41:01.979958 | orchestrator |  "osd_lvm_uuid": "a154e298-15cb-5d50-9a1c-17bc1371db7e" 2025-09-16 00:41:01.979969 | orchestrator |  }, 2025-09-16 00:41:01.979980 | orchestrator |  "sdc": { 2025-09-16 00:41:01.979991 | orchestrator |  "osd_lvm_uuid": "56010334-63d7-5603-a2fe-432c47d6dcb8" 2025-09-16 00:41:01.980002 | orchestrator |  } 2025-09-16 00:41:01.980013 | orchestrator |  }, 2025-09-16 00:41:01.980024 | orchestrator |  "lvm_volumes": [ 2025-09-16 00:41:01.980035 | orchestrator |  { 2025-09-16 00:41:01.980046 | orchestrator |  "data": "osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e", 2025-09-16 00:41:01.980057 | orchestrator |  "data_vg": "ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e" 2025-09-16 00:41:01.980068 | orchestrator |  }, 2025-09-16 00:41:01.980079 | orchestrator |  { 2025-09-16 00:41:01.980090 | orchestrator |  "data": "osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8", 2025-09-16 00:41:01.980101 | orchestrator |  "data_vg": "ceph-56010334-63d7-5603-a2fe-432c47d6dcb8" 2025-09-16 00:41:01.980111 | orchestrator |  } 2025-09-16 00:41:01.980122 | orchestrator |  ] 2025-09-16 00:41:01.980133 | orchestrator |  } 2025-09-16 00:41:01.980144 | orchestrator | } 2025-09-16 00:41:01.980155 | orchestrator | 2025-09-16 00:41:01.980166 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-16 00:41:01.980177 | orchestrator | Tuesday 16 September 2025 00:40:59 +0000 (0:00:00.167) 0:00:24.389 ***** 2025-09-16 00:41:01.980188 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-16 00:41:01.980198 | orchestrator | 2025-09-16 00:41:01.980209 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-16 00:41:01.980220 | orchestrator | 2025-09-16 00:41:01.980231 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-16 00:41:01.980242 | orchestrator | Tuesday 16 September 2025 00:41:00 +0000 (0:00:00.856) 0:00:25.245 ***** 2025-09-16 00:41:01.980252 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-16 00:41:01.980263 | orchestrator | 2025-09-16 00:41:01.980274 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-16 00:41:01.980285 | orchestrator | Tuesday 16 September 2025 00:41:01 +0000 (0:00:00.340) 0:00:25.586 ***** 2025-09-16 00:41:01.980302 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:41:01.980313 | orchestrator | 2025-09-16 00:41:01.980330 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:41:01.980341 | orchestrator | Tuesday 16 September 2025 00:41:01 +0000 (0:00:00.488) 0:00:26.074 ***** 2025-09-16 00:41:01.980352 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-16 00:41:01.980363 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-16 00:41:01.980374 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-16 00:41:01.980384 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-16 00:41:01.980395 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-16 00:41:01.980406 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop5) 2025-09-16 00:41:01.980424 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-16 00:41:08.971192 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-16 00:41:08.971303 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-16 00:41:08.971318 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-16 00:41:08.971330 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-16 00:41:08.971342 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-16 00:41:08.971353 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-16 00:41:08.971364 | orchestrator | 2025-09-16 00:41:08.971376 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:41:08.971388 | orchestrator | Tuesday 16 September 2025 00:41:01 +0000 (0:00:00.345) 0:00:26.420 ***** 2025-09-16 00:41:08.971399 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:08.971411 | orchestrator | 2025-09-16 00:41:08.971422 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:41:08.971433 | orchestrator | Tuesday 16 September 2025 00:41:02 +0000 (0:00:00.167) 0:00:26.587 ***** 2025-09-16 00:41:08.971444 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:08.971455 | orchestrator | 2025-09-16 00:41:08.971466 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:41:08.971476 | orchestrator | Tuesday 16 September 2025 00:41:02 +0000 (0:00:00.151) 0:00:26.738 ***** 2025-09-16 00:41:08.971487 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:08.971498 | orchestrator | 2025-09-16 00:41:08.971509 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:41:08.971520 | orchestrator | Tuesday 16 September 2025 00:41:02 +0000 (0:00:00.154) 0:00:26.893 ***** 2025-09-16 00:41:08.971530 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:08.971541 | orchestrator | 2025-09-16 00:41:08.971552 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:41:08.971563 | orchestrator | Tuesday 16 September 2025 00:41:02 +0000 (0:00:00.163) 0:00:27.056 ***** 2025-09-16 00:41:08.971574 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:08.971584 | orchestrator | 2025-09-16 00:41:08.971595 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:41:08.971606 | orchestrator | Tuesday 16 September 2025 00:41:02 +0000 (0:00:00.167) 0:00:27.223 ***** 2025-09-16 00:41:08.971616 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:08.971627 | orchestrator | 2025-09-16 00:41:08.971638 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:41:08.971649 | orchestrator | Tuesday 16 September 2025 00:41:02 +0000 (0:00:00.177) 0:00:27.400 ***** 2025-09-16 00:41:08.971660 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:08.971692 | orchestrator | 2025-09-16 00:41:08.971703 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-09-16 00:41:08.971714 | orchestrator | Tuesday 16 September 2025 00:41:03 +0000 (0:00:00.160) 0:00:27.561 ***** 2025-09-16 00:41:08.971725 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:08.971738 | orchestrator | 2025-09-16 00:41:08.971750 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:41:08.971763 | orchestrator | Tuesday 16 September 2025 00:41:03 +0000 (0:00:00.147) 0:00:27.709 ***** 2025-09-16 00:41:08.971807 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595) 2025-09-16 00:41:08.971821 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595) 2025-09-16 00:41:08.971833 | orchestrator | 2025-09-16 00:41:08.971847 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:41:08.971859 | orchestrator | Tuesday 16 September 2025 00:41:03 +0000 (0:00:00.503) 0:00:28.212 ***** 2025-09-16 00:41:08.971872 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a99d92e2-a7d0-4115-a3b5-db7bfa0170a9) 2025-09-16 00:41:08.971884 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a99d92e2-a7d0-4115-a3b5-db7bfa0170a9) 2025-09-16 00:41:08.971896 | orchestrator | 2025-09-16 00:41:08.971908 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:41:08.971921 | orchestrator | Tuesday 16 September 2025 00:41:04 +0000 (0:00:00.681) 0:00:28.894 ***** 2025-09-16 00:41:08.971935 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f8c86b93-6440-4cc6-ba3c-00ae05f2a443) 2025-09-16 00:41:08.971947 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f8c86b93-6440-4cc6-ba3c-00ae05f2a443) 2025-09-16 00:41:08.971960 | orchestrator | 2025-09-16 00:41:08.971972 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:41:08.971985 | orchestrator | Tuesday 16 September 2025 00:41:04 +0000 (0:00:00.376) 0:00:29.270 ***** 2025-09-16 00:41:08.971998 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ad9de541-7002-4a51-9253-a212a9f46ca2) 2025-09-16 00:41:08.972010 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ad9de541-7002-4a51-9253-a212a9f46ca2) 2025-09-16 00:41:08.972023 | orchestrator | 2025-09-16 00:41:08.972035 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:41:08.972047 | orchestrator | Tuesday 16 September 2025 00:41:05 +0000 (0:00:00.413) 0:00:29.684 ***** 2025-09-16 00:41:08.972060 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-16 00:41:08.972072 | orchestrator | 2025-09-16 00:41:08.972085 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:41:08.972096 | orchestrator | Tuesday 16 September 2025 00:41:05 +0000 (0:00:00.277) 0:00:29.962 ***** 2025-09-16 00:41:08.972125 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-16 00:41:08.972136 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-16 00:41:08.972147 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-16 00:41:08.972158 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-16 00:41:08.972169 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-16 00:41:08.972179 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-16 00:41:08.972208 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-16 00:41:08.972220 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-16 00:41:08.972231 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-16 00:41:08.972251 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-16 00:41:08.972262 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-16 00:41:08.972273 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-16 00:41:08.972284 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-16 00:41:08.972294 | orchestrator | 2025-09-16 00:41:08.972305 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:41:08.972316 | orchestrator | Tuesday 16 September 2025 00:41:05 +0000 (0:00:00.344) 0:00:30.307 ***** 2025-09-16 00:41:08.972327 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:08.972337 | orchestrator | 2025-09-16 00:41:08.972348 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:41:08.972359 | orchestrator | Tuesday 16 September 2025 00:41:06 +0000 (0:00:00.186) 0:00:30.494 ***** 2025-09-16 00:41:08.972370 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:08.972381 | orchestrator | 2025-09-16 00:41:08.972391 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:41:08.972402 | orchestrator | Tuesday 16 September 2025 00:41:06 +0000 (0:00:00.171) 0:00:30.666 ***** 2025-09-16 00:41:08.972413 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:08.972423 | orchestrator | 2025-09-16 00:41:08.972439 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:41:08.972450 | orchestrator | Tuesday 16 September 2025 00:41:06 +0000 (0:00:00.169) 0:00:30.836 ***** 2025-09-16 00:41:08.972461 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:08.972472 | orchestrator | 2025-09-16 00:41:08.972483 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:41:08.972493 | orchestrator | Tuesday 16 September 2025 00:41:06 +0000 (0:00:00.186) 0:00:31.022 ***** 2025-09-16 00:41:08.972504 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:08.972515 | orchestrator | 2025-09-16 00:41:08.972526 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:41:08.972537 | orchestrator | Tuesday 16 September 2025 00:41:06 +0000 (0:00:00.183) 0:00:31.205 ***** 2025-09-16 00:41:08.972548 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:08.972558 | orchestrator | 2025-09-16 00:41:08.972569 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-09-16 00:41:08.972580 | orchestrator | Tuesday 16 September 2025 00:41:07 +0000 (0:00:00.472) 0:00:31.678 ***** 2025-09-16 00:41:08.972590 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:08.972601 | orchestrator | 2025-09-16 00:41:08.972612 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:41:08.972622 | orchestrator | Tuesday 16 September 2025 00:41:07 +0000 (0:00:00.191) 0:00:31.870 ***** 2025-09-16 00:41:08.972633 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:08.972644 | orchestrator | 2025-09-16 00:41:08.972655 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:41:08.972665 | orchestrator | Tuesday 16 September 2025 00:41:07 +0000 (0:00:00.174) 0:00:32.044 ***** 2025-09-16 00:41:08.972676 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-16 00:41:08.972687 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-16 00:41:08.972698 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-16 00:41:08.972709 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-16 00:41:08.972720 | orchestrator | 2025-09-16 00:41:08.972731 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:41:08.972741 | orchestrator | Tuesday 16 September 2025 00:41:08 +0000 (0:00:00.591) 0:00:32.636 ***** 2025-09-16 00:41:08.972752 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:08.972763 | orchestrator | 2025-09-16 00:41:08.972794 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:41:08.972812 | orchestrator | Tuesday 16 September 2025 00:41:08 +0000 (0:00:00.174) 0:00:32.811 ***** 2025-09-16 00:41:08.972823 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:08.972834 | orchestrator | 2025-09-16 00:41:08.972844 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:41:08.972855 | orchestrator | Tuesday 16 September 2025 00:41:08 +0000 (0:00:00.196) 0:00:33.008 ***** 2025-09-16 00:41:08.972866 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:08.972877 | orchestrator | 2025-09-16 00:41:08.972888 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:41:08.972898 | orchestrator | Tuesday 16 September 2025 00:41:08 +0000 (0:00:00.216) 0:00:33.225 ***** 2025-09-16 00:41:08.972909 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:08.972920 | orchestrator | 2025-09-16 00:41:08.972931 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-16 00:41:08.972948 | orchestrator | Tuesday 16 September 2025 00:41:08 +0000 (0:00:00.194) 0:00:33.419 ***** 2025-09-16 00:41:12.644110 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-09-16 00:41:12.644214 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-09-16 00:41:12.644230 | orchestrator | 2025-09-16 00:41:12.644244 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-16 00:41:12.644255 | orchestrator | Tuesday 16 September 2025 00:41:09 +0000 (0:00:00.158) 0:00:33.578 ***** 2025-09-16 00:41:12.644266 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:12.644278 | orchestrator | 2025-09-16 00:41:12.644289 | orchestrator | TASK [Generate DB 
VG names] **************************************************** 2025-09-16 00:41:12.644300 | orchestrator | Tuesday 16 September 2025 00:41:09 +0000 (0:00:00.139) 0:00:33.718 ***** 2025-09-16 00:41:12.644311 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:12.644322 | orchestrator | 2025-09-16 00:41:12.644333 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-16 00:41:12.644344 | orchestrator | Tuesday 16 September 2025 00:41:09 +0000 (0:00:00.136) 0:00:33.855 ***** 2025-09-16 00:41:12.644355 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:12.644366 | orchestrator | 2025-09-16 00:41:12.644376 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-16 00:41:12.644387 | orchestrator | Tuesday 16 September 2025 00:41:09 +0000 (0:00:00.125) 0:00:33.980 ***** 2025-09-16 00:41:12.644398 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:41:12.644410 | orchestrator | 2025-09-16 00:41:12.644421 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-16 00:41:12.644432 | orchestrator | Tuesday 16 September 2025 00:41:09 +0000 (0:00:00.301) 0:00:34.281 ***** 2025-09-16 00:41:12.644444 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '457b984f-2001-5589-9984-9a697803acd2'}}) 2025-09-16 00:41:12.644455 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd2877fc6-62dc-51ad-b157-4c09a4f274b5'}}) 2025-09-16 00:41:12.644466 | orchestrator | 2025-09-16 00:41:12.644477 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-16 00:41:12.644488 | orchestrator | Tuesday 16 September 2025 00:41:09 +0000 (0:00:00.157) 0:00:34.438 ***** 2025-09-16 00:41:12.644499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '457b984f-2001-5589-9984-9a697803acd2'}})  2025-09-16 00:41:12.644512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd2877fc6-62dc-51ad-b157-4c09a4f274b5'}})  2025-09-16 00:41:12.644523 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:12.644534 | orchestrator | 2025-09-16 00:41:12.644545 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-16 00:41:12.644556 | orchestrator | Tuesday 16 September 2025 00:41:10 +0000 (0:00:00.112) 0:00:34.551 ***** 2025-09-16 00:41:12.644567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '457b984f-2001-5589-9984-9a697803acd2'}})  2025-09-16 00:41:12.644601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd2877fc6-62dc-51ad-b157-4c09a4f274b5'}})  2025-09-16 00:41:12.644613 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:12.644624 | orchestrator | 2025-09-16 00:41:12.644635 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-16 00:41:12.644648 | orchestrator | Tuesday 16 September 2025 00:41:10 +0000 (0:00:00.144) 0:00:34.696 ***** 2025-09-16 00:41:12.644660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '457b984f-2001-5589-9984-9a697803acd2'}})  2025-09-16 00:41:12.644690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd2877fc6-62dc-51ad-b157-4c09a4f274b5'}})  2025-09-16 
00:41:12.644703 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:12.644716 | orchestrator | 2025-09-16 00:41:12.644728 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-16 00:41:12.644740 | orchestrator | Tuesday 16 September 2025 00:41:10 +0000 (0:00:00.133) 0:00:34.829 ***** 2025-09-16 00:41:12.644753 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:41:12.644765 | orchestrator | 2025-09-16 00:41:12.644811 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-16 00:41:12.644824 | orchestrator | Tuesday 16 September 2025 00:41:10 +0000 (0:00:00.131) 0:00:34.961 ***** 2025-09-16 00:41:12.644837 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:41:12.644849 | orchestrator | 2025-09-16 00:41:12.644861 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-16 00:41:12.644874 | orchestrator | Tuesday 16 September 2025 00:41:10 +0000 (0:00:00.132) 0:00:35.093 ***** 2025-09-16 00:41:12.644887 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:12.644900 | orchestrator | 2025-09-16 00:41:12.644912 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-16 00:41:12.644925 | orchestrator | Tuesday 16 September 2025 00:41:10 +0000 (0:00:00.123) 0:00:35.217 ***** 2025-09-16 00:41:12.644937 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:12.644950 | orchestrator | 2025-09-16 00:41:12.644963 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-16 00:41:12.644976 | orchestrator | Tuesday 16 September 2025 00:41:10 +0000 (0:00:00.125) 0:00:35.343 ***** 2025-09-16 00:41:12.644988 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:12.645001 | orchestrator | 2025-09-16 00:41:12.645013 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-16 00:41:12.645026 | orchestrator | Tuesday 16 September 2025 00:41:11 +0000 (0:00:00.122) 0:00:35.465 ***** 2025-09-16 00:41:12.645039 | orchestrator | ok: [testbed-node-5] => { 2025-09-16 00:41:12.645050 | orchestrator |  "ceph_osd_devices": { 2025-09-16 00:41:12.645061 | orchestrator |  "sdb": { 2025-09-16 00:41:12.645074 | orchestrator |  "osd_lvm_uuid": "457b984f-2001-5589-9984-9a697803acd2" 2025-09-16 00:41:12.645103 | orchestrator |  }, 2025-09-16 00:41:12.645114 | orchestrator |  "sdc": { 2025-09-16 00:41:12.645125 | orchestrator |  "osd_lvm_uuid": "d2877fc6-62dc-51ad-b157-4c09a4f274b5" 2025-09-16 00:41:12.645136 | orchestrator |  } 2025-09-16 00:41:12.645147 | orchestrator |  } 2025-09-16 00:41:12.645158 | orchestrator | } 2025-09-16 00:41:12.645170 | orchestrator | 2025-09-16 00:41:12.645181 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-16 00:41:12.645192 | orchestrator | Tuesday 16 September 2025 00:41:11 +0000 (0:00:00.119) 0:00:35.585 ***** 2025-09-16 00:41:12.645202 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:12.645213 | orchestrator | 2025-09-16 00:41:12.645224 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-16 00:41:12.645235 | orchestrator | Tuesday 16 September 2025 00:41:11 +0000 (0:00:00.129) 0:00:35.714 ***** 2025-09-16 00:41:12.645245 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:12.645256 | orchestrator | 2025-09-16 00:41:12.645267 | orchestrator | 
TASK [Print shared DB/WAL devices] ********************************************* 2025-09-16 00:41:12.645288 | orchestrator | Tuesday 16 September 2025 00:41:11 +0000 (0:00:00.295) 0:00:36.010 ***** 2025-09-16 00:41:12.645299 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:41:12.645310 | orchestrator | 2025-09-16 00:41:12.645320 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-16 00:41:12.645331 | orchestrator | Tuesday 16 September 2025 00:41:11 +0000 (0:00:00.103) 0:00:36.113 ***** 2025-09-16 00:41:12.645342 | orchestrator | changed: [testbed-node-5] => { 2025-09-16 00:41:12.645353 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-16 00:41:12.645364 | orchestrator |  "ceph_osd_devices": { 2025-09-16 00:41:12.645375 | orchestrator |  "sdb": { 2025-09-16 00:41:12.645386 | orchestrator |  "osd_lvm_uuid": "457b984f-2001-5589-9984-9a697803acd2" 2025-09-16 00:41:12.645397 | orchestrator |  }, 2025-09-16 00:41:12.645408 | orchestrator |  "sdc": { 2025-09-16 00:41:12.645419 | orchestrator |  "osd_lvm_uuid": "d2877fc6-62dc-51ad-b157-4c09a4f274b5" 2025-09-16 00:41:12.645430 | orchestrator |  } 2025-09-16 00:41:12.645440 | orchestrator |  }, 2025-09-16 00:41:12.645451 | orchestrator |  "lvm_volumes": [ 2025-09-16 00:41:12.645462 | orchestrator |  { 2025-09-16 00:41:12.645473 | orchestrator |  "data": "osd-block-457b984f-2001-5589-9984-9a697803acd2", 2025-09-16 00:41:12.645484 | orchestrator |  "data_vg": "ceph-457b984f-2001-5589-9984-9a697803acd2" 2025-09-16 00:41:12.645495 | orchestrator |  }, 2025-09-16 00:41:12.645505 | orchestrator |  { 2025-09-16 00:41:12.645516 | orchestrator |  "data": "osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5", 2025-09-16 00:41:12.645527 | orchestrator |  "data_vg": "ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5" 2025-09-16 00:41:12.645538 | orchestrator |  } 2025-09-16 00:41:12.645548 | orchestrator |  ] 2025-09-16 00:41:12.645559 | orchestrator |  } 2025-09-16 00:41:12.645574 | orchestrator | } 2025-09-16 00:41:12.645586 | orchestrator | 2025-09-16 00:41:12.645596 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-16 00:41:12.645607 | orchestrator | Tuesday 16 September 2025 00:41:11 +0000 (0:00:00.198) 0:00:36.312 ***** 2025-09-16 00:41:12.645618 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-16 00:41:12.645629 | orchestrator | 2025-09-16 00:41:12.645639 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:41:12.645650 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-16 00:41:12.645663 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-16 00:41:12.645674 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-16 00:41:12.645685 | orchestrator | 2025-09-16 00:41:12.645695 | orchestrator | 2025-09-16 00:41:12.645706 | orchestrator | 2025-09-16 00:41:12.645717 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:41:12.645728 | orchestrator | Tuesday 16 September 2025 00:41:12 +0000 (0:00:00.763) 0:00:37.076 ***** 2025-09-16 00:41:12.645739 | orchestrator | =============================================================================== 2025-09-16 00:41:12.645749 | orchestrator | Write configuration file 
------------------------------------------------ 3.77s
2025-09-16 00:41:12.645760 | orchestrator | Add known links to the list of available block devices ------------------ 1.10s
2025-09-16 00:41:12.645794 | orchestrator | Add known partitions to the list of available block devices ------------- 1.01s
2025-09-16 00:41:12.645806 | orchestrator | Get initial list of available block devices ----------------------------- 0.93s
2025-09-16 00:41:12.645817 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.85s
2025-09-16 00:41:12.645835 | orchestrator | Add known partitions to the list of available block devices ------------- 0.82s
2025-09-16 00:41:12.645846 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s
2025-09-16 00:41:12.645857 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s
2025-09-16 00:41:12.645867 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.61s
2025-09-16 00:41:12.645878 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s
2025-09-16 00:41:12.645889 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s
2025-09-16 00:41:12.645900 | orchestrator | Print configuration data ------------------------------------------------ 0.57s
2025-09-16 00:41:12.645910 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.57s
2025-09-16 00:41:12.645921 | orchestrator | Add known links to the list of available block devices ------------------ 0.56s
2025-09-16 00:41:12.645939 | orchestrator | Add known partitions to the list of available block devices ------------- 0.56s
2025-09-16 00:41:12.851192 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.56s
2025-09-16 00:41:12.851277 | orchestrator | Add known links to the list of available block devices ------------------ 0.55s
2025-09-16 00:41:12.851291 | orchestrator | Print DB devices -------------------------------------------------------- 0.53s
2025-09-16 00:41:12.851303 | orchestrator | Set WAL devices config data --------------------------------------------- 0.52s
2025-09-16 00:41:12.851314 | orchestrator | Add known links to the list of available block devices ------------------ 0.50s
2025-09-16 00:41:35.262181 | orchestrator | 2025-09-16 00:41:35 | INFO  | Task 7b064f9c-b1b2-474a-a147-7ce5f2937659 (sync inventory) is running in background. Output coming soon.
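The "Ceph configure LVM" play above derives lvm_volumes purely from the per-device osd_lvm_uuid values when only a block device is used (the DB and WAL branches are all skipped). As a reading aid, here is a minimal Python sketch of that block-only mapping, using the values logged for testbed-node-3; this is an illustration only, the real task is implemented as Ansible/Jinja2 inside the OSISM playbook.

# Minimal sketch of the "block only" lvm_volumes mapping shown in the log.
# Input values are the ones printed for testbed-node-3 above; the function
# name and structure are illustrative, not the OSISM implementation.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "8832b43a-4370-5f7f-b8ca-e1ef860202d6"},
    "sdc": {"osd_lvm_uuid": "b409e677-b998-57d2-be40-43b65c9fb72d"},
}

def compile_lvm_volumes(devices):
    """Build the block-only lvm_volumes list as printed by the play."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in devices.values()
    ]

# Matches the lvm_volumes list in the "Print configuration data" output above.
print(compile_lvm_volumes(ceph_osd_devices))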
2025-09-16 00:41:59.067858 | orchestrator | 2025-09-16 00:41:36 | INFO  | Starting group_vars file reorganization
2025-09-16 00:41:59.067964 | orchestrator | 2025-09-16 00:41:36 | INFO  | Moved 0 file(s) to their respective directories
2025-09-16 00:41:59.067980 | orchestrator | 2025-09-16 00:41:36 | INFO  | Group_vars file reorganization completed
2025-09-16 00:41:59.067991 | orchestrator | 2025-09-16 00:41:39 | INFO  | Starting variable preparation from inventory
2025-09-16 00:41:59.068001 | orchestrator | 2025-09-16 00:41:42 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-09-16 00:41:59.068012 | orchestrator | 2025-09-16 00:41:42 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-09-16 00:41:59.068022 | orchestrator | 2025-09-16 00:41:42 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-09-16 00:41:59.068051 | orchestrator | 2025-09-16 00:41:42 | INFO  | 3 file(s) written, 6 host(s) processed
2025-09-16 00:41:59.068062 | orchestrator | 2025-09-16 00:41:42 | INFO  | Variable preparation completed
2025-09-16 00:41:59.068072 | orchestrator | 2025-09-16 00:41:43 | INFO  | Starting inventory overwrite handling
2025-09-16 00:41:59.068082 | orchestrator | 2025-09-16 00:41:43 | INFO  | Handling group overwrites in 99-overwrite
2025-09-16 00:41:59.068097 | orchestrator | 2025-09-16 00:41:43 | INFO  | Removing group frr:children from 60-generic
2025-09-16 00:41:59.068107 | orchestrator | 2025-09-16 00:41:43 | INFO  | Removing group storage:children from 50-kolla
2025-09-16 00:41:59.068117 | orchestrator | 2025-09-16 00:41:43 | INFO  | Removing group netbird:children from 50-infrastruture
2025-09-16 00:41:59.068126 | orchestrator | 2025-09-16 00:41:43 | INFO  | Removing group ceph-rgw from 50-ceph
2025-09-16 00:41:59.068137 | orchestrator | 2025-09-16 00:41:43 | INFO  | Removing group ceph-mds from 50-ceph
2025-09-16 00:41:59.068146 | orchestrator | 2025-09-16 00:41:43 | INFO  | Handling group overwrites in 20-roles
2025-09-16 00:41:59.068156 | orchestrator | 2025-09-16 00:41:43 | INFO  | Removing group k3s_node from 50-infrastruture
2025-09-16 00:41:59.068190 | orchestrator | 2025-09-16 00:41:43 | INFO  | Removed 6 group(s) in total
2025-09-16 00:41:59.068200 | orchestrator | 2025-09-16 00:41:43 | INFO  | Inventory overwrite handling completed
2025-09-16 00:41:59.068210 | orchestrator | 2025-09-16 00:41:44 | INFO  | Starting merge of inventory files
2025-09-16 00:41:59.068219 | orchestrator | 2025-09-16 00:41:44 | INFO  | Inventory files merged successfully
2025-09-16 00:41:59.068229 | orchestrator | 2025-09-16 00:41:48 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-09-16 00:41:59.068239 | orchestrator | 2025-09-16 00:41:57 | INFO  | Successfully wrote ClusterShell configuration
2025-09-16 00:41:59.068249 | orchestrator | [master f1066a3] 2025-09-16-00-41
2025-09-16 00:41:59.068259 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-09-16 00:42:00.963291 | orchestrator | 2025-09-16 00:42:00 | INFO  | Task 87838019-6256-4f11-bece-5fa7bf7f3f24 (ceph-create-lvm-devices) was prepared for execution.
2025-09-16 00:42:00.963384 | orchestrator | 2025-09-16 00:42:00 | INFO  | It takes a moment until task 87838019-6256-4f11-bece-5fa7bf7f3f24 (ceph-create-lvm-devices) has been started and output is visible here.
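The sync-inventory output above shows the manager's overwrite handling: when a higher-priority inventory layer (99-overwrite, 20-roles) defines a group, the same group is removed from the lower-priority layers (50-ceph, 50-kolla, 60-generic, ...) before the files are merged. Below is a rough Python sketch of that idea, with layer and group names taken from the log; the actual osism manager code is not reproduced here and will differ in detail.

# Hedged sketch of "inventory overwrite handling": drop a group from lower
# layers whenever the overlay layer redefines it, then report the count.
# Layer/group names are copied from the log; the member lists are invented
# placeholders for illustration.
layers = {
    "50-ceph": {"ceph-rgw": ["testbed-node-0"], "ceph-mds": ["testbed-node-0"]},
    "60-generic": {"frr:children": ["generic"]},
    "99-overwrite": {"frr:children": [], "ceph-rgw": [], "ceph-mds": []},
}

def handle_overwrites(layers, overlay):
    """Remove groups from lower-priority layers when the overlay defines them."""
    removed = 0
    for group in layers[overlay]:
        for name, groups in layers.items():
            if name != overlay and group in groups:
                print(f"Removing group {group} from {name}")
                del groups[group]
                removed += 1
    print(f"Removed {removed} group(s) in total")

handle_overwrites(layers, "99-overwrite")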
2025-09-16 00:42:11.282611 | orchestrator | 2025-09-16 00:42:11.282724 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-16 00:42:11.282741 | orchestrator | 2025-09-16 00:42:11.282753 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-16 00:42:11.282827 | orchestrator | Tuesday 16 September 2025 00:42:04 +0000 (0:00:00.242) 0:00:00.242 ***** 2025-09-16 00:42:11.282843 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-16 00:42:11.282854 | orchestrator | 2025-09-16 00:42:11.282865 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-16 00:42:11.282876 | orchestrator | Tuesday 16 September 2025 00:42:05 +0000 (0:00:00.223) 0:00:00.465 ***** 2025-09-16 00:42:11.282888 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:42:11.282900 | orchestrator | 2025-09-16 00:42:11.282910 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:11.282921 | orchestrator | Tuesday 16 September 2025 00:42:05 +0000 (0:00:00.195) 0:00:00.661 ***** 2025-09-16 00:42:11.282932 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-16 00:42:11.282944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-16 00:42:11.282955 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-16 00:42:11.282966 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-16 00:42:11.282976 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-16 00:42:11.282987 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-16 00:42:11.282998 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-16 00:42:11.283008 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-16 00:42:11.283019 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-16 00:42:11.283030 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-16 00:42:11.283040 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-16 00:42:11.283051 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-16 00:42:11.283062 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-16 00:42:11.283073 | orchestrator | 2025-09-16 00:42:11.283083 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:11.283116 | orchestrator | Tuesday 16 September 2025 00:42:05 +0000 (0:00:00.352) 0:00:01.014 ***** 2025-09-16 00:42:11.283128 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:11.283139 | orchestrator | 2025-09-16 00:42:11.283151 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:11.283164 | orchestrator | Tuesday 16 September 2025 00:42:05 +0000 (0:00:00.348) 0:00:01.362 ***** 2025-09-16 00:42:11.283176 | orchestrator | skipping: [testbed-node-3] 2025-09-16 
00:42:11.283189 | orchestrator | 2025-09-16 00:42:11.283201 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:11.283213 | orchestrator | Tuesday 16 September 2025 00:42:06 +0000 (0:00:00.167) 0:00:01.530 ***** 2025-09-16 00:42:11.283225 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:11.283237 | orchestrator | 2025-09-16 00:42:11.283249 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:11.283261 | orchestrator | Tuesday 16 September 2025 00:42:06 +0000 (0:00:00.181) 0:00:01.711 ***** 2025-09-16 00:42:11.283273 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:11.283286 | orchestrator | 2025-09-16 00:42:11.283298 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:11.283310 | orchestrator | Tuesday 16 September 2025 00:42:06 +0000 (0:00:00.195) 0:00:01.907 ***** 2025-09-16 00:42:11.283322 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:11.283334 | orchestrator | 2025-09-16 00:42:11.283346 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:11.283358 | orchestrator | Tuesday 16 September 2025 00:42:06 +0000 (0:00:00.167) 0:00:02.074 ***** 2025-09-16 00:42:11.283370 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:11.283383 | orchestrator | 2025-09-16 00:42:11.283394 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:11.283406 | orchestrator | Tuesday 16 September 2025 00:42:06 +0000 (0:00:00.198) 0:00:02.273 ***** 2025-09-16 00:42:11.283418 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:11.283431 | orchestrator | 2025-09-16 00:42:11.283443 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:11.283455 | orchestrator | Tuesday 16 September 2025 00:42:07 +0000 (0:00:00.180) 0:00:02.453 ***** 2025-09-16 00:42:11.283467 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:11.283479 | orchestrator | 2025-09-16 00:42:11.283492 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:11.283504 | orchestrator | Tuesday 16 September 2025 00:42:07 +0000 (0:00:00.180) 0:00:02.634 ***** 2025-09-16 00:42:11.283515 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf) 2025-09-16 00:42:11.283527 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf) 2025-09-16 00:42:11.283537 | orchestrator | 2025-09-16 00:42:11.283548 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:11.283558 | orchestrator | Tuesday 16 September 2025 00:42:07 +0000 (0:00:00.373) 0:00:03.008 ***** 2025-09-16 00:42:11.283588 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_216f9756-46fe-48b3-8a57-6cc5b7e0c275) 2025-09-16 00:42:11.283600 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_216f9756-46fe-48b3-8a57-6cc5b7e0c275) 2025-09-16 00:42:11.283611 | orchestrator | 2025-09-16 00:42:11.283621 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:11.283632 | orchestrator | Tuesday 16 September 2025 00:42:08 +0000 (0:00:00.380) 0:00:03.388 ***** 2025-09-16 
00:42:11.283643 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ebe7fd99-ddf0-4119-8dea-cb8b427f2aed) 2025-09-16 00:42:11.283653 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ebe7fd99-ddf0-4119-8dea-cb8b427f2aed) 2025-09-16 00:42:11.283664 | orchestrator | 2025-09-16 00:42:11.283675 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:11.283693 | orchestrator | Tuesday 16 September 2025 00:42:08 +0000 (0:00:00.511) 0:00:03.900 ***** 2025-09-16 00:42:11.283704 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6b7c66eb-e150-40bb-863f-cd4924cbb0ab) 2025-09-16 00:42:11.283715 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6b7c66eb-e150-40bb-863f-cd4924cbb0ab) 2025-09-16 00:42:11.283725 | orchestrator | 2025-09-16 00:42:11.283736 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:11.283747 | orchestrator | Tuesday 16 September 2025 00:42:09 +0000 (0:00:00.649) 0:00:04.549 ***** 2025-09-16 00:42:11.283757 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-16 00:42:11.283795 | orchestrator | 2025-09-16 00:42:11.283807 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:11.283817 | orchestrator | Tuesday 16 September 2025 00:42:09 +0000 (0:00:00.275) 0:00:04.825 ***** 2025-09-16 00:42:11.283828 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-16 00:42:11.283839 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-16 00:42:11.283850 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-16 00:42:11.283861 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-16 00:42:11.283890 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-16 00:42:11.283902 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-16 00:42:11.283913 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-16 00:42:11.283923 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-16 00:42:11.283934 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-16 00:42:11.283945 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-16 00:42:11.283956 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-16 00:42:11.283967 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-16 00:42:11.283983 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-16 00:42:11.283994 | orchestrator | 2025-09-16 00:42:11.284005 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:11.284016 | orchestrator | Tuesday 16 September 2025 00:42:09 +0000 (0:00:00.365) 0:00:05.190 ***** 2025-09-16 00:42:11.284027 | orchestrator | skipping: [testbed-node-3] 
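The repeated "Add known links" and "Add known partitions" tasks include /ansible/tasks/_add-device-links.yml and _add-device-partitions.yml once per device and extend the list of available block devices with each device's /dev/disk/by-id aliases (the scsi-0QEMU_…/scsi-SQEMU_… items) and its partitions (sda1, sda14, …). A minimal sketch of how such links can be collected from Ansible's hardware facts, assuming a fact name of _available_devices; the actual included task files may differ:

# Sketch only: extend a device list with the by-id links Ansible reports for one device.
# "item" is the device name (e.g. sdb) passed in by the including loop.
- name: Add known links to the list of available block devices
  ansible.builtin.set_fact:
    _available_devices: >-
      {{ _available_devices | default([]) +
         ansible_facts.devices[item].links.ids }}
  when: ansible_facts.devices[item].links.ids | length > 0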
2025-09-16 00:42:11.284038 | orchestrator | 2025-09-16 00:42:11.284049 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:11.284060 | orchestrator | Tuesday 16 September 2025 00:42:10 +0000 (0:00:00.192) 0:00:05.383 ***** 2025-09-16 00:42:11.284071 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:11.284082 | orchestrator | 2025-09-16 00:42:11.284093 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:11.284104 | orchestrator | Tuesday 16 September 2025 00:42:10 +0000 (0:00:00.178) 0:00:05.562 ***** 2025-09-16 00:42:11.284115 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:11.284126 | orchestrator | 2025-09-16 00:42:11.284137 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:11.284148 | orchestrator | Tuesday 16 September 2025 00:42:10 +0000 (0:00:00.183) 0:00:05.745 ***** 2025-09-16 00:42:11.284158 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:11.284169 | orchestrator | 2025-09-16 00:42:11.284180 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:11.284198 | orchestrator | Tuesday 16 September 2025 00:42:10 +0000 (0:00:00.173) 0:00:05.918 ***** 2025-09-16 00:42:11.284209 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:11.284220 | orchestrator | 2025-09-16 00:42:11.284232 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:11.284242 | orchestrator | Tuesday 16 September 2025 00:42:10 +0000 (0:00:00.192) 0:00:06.111 ***** 2025-09-16 00:42:11.284253 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:11.284264 | orchestrator | 2025-09-16 00:42:11.284275 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:11.284286 | orchestrator | Tuesday 16 September 2025 00:42:10 +0000 (0:00:00.179) 0:00:06.290 ***** 2025-09-16 00:42:11.284297 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:11.284308 | orchestrator | 2025-09-16 00:42:11.284319 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:11.284330 | orchestrator | Tuesday 16 September 2025 00:42:11 +0000 (0:00:00.176) 0:00:06.467 ***** 2025-09-16 00:42:11.284348 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:19.069896 | orchestrator | 2025-09-16 00:42:19.069982 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:19.069994 | orchestrator | Tuesday 16 September 2025 00:42:11 +0000 (0:00:00.180) 0:00:06.648 ***** 2025-09-16 00:42:19.070002 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-16 00:42:19.070010 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-16 00:42:19.070059 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-16 00:42:19.070066 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-16 00:42:19.070073 | orchestrator | 2025-09-16 00:42:19.070080 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:19.070087 | orchestrator | Tuesday 16 September 2025 00:42:12 +0000 (0:00:01.094) 0:00:07.742 ***** 2025-09-16 00:42:19.070095 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:19.070122 | orchestrator | 2025-09-16 00:42:19.070131 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:19.070138 | orchestrator | Tuesday 16 September 2025 00:42:12 +0000 (0:00:00.209) 0:00:07.952 ***** 2025-09-16 00:42:19.070144 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:19.070151 | orchestrator | 2025-09-16 00:42:19.070158 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:19.070164 | orchestrator | Tuesday 16 September 2025 00:42:12 +0000 (0:00:00.209) 0:00:08.161 ***** 2025-09-16 00:42:19.070171 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:19.070177 | orchestrator | 2025-09-16 00:42:19.070184 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:19.070191 | orchestrator | Tuesday 16 September 2025 00:42:12 +0000 (0:00:00.202) 0:00:08.363 ***** 2025-09-16 00:42:19.070198 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:19.070204 | orchestrator | 2025-09-16 00:42:19.070211 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-16 00:42:19.070218 | orchestrator | Tuesday 16 September 2025 00:42:13 +0000 (0:00:00.204) 0:00:08.567 ***** 2025-09-16 00:42:19.070224 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:19.070231 | orchestrator | 2025-09-16 00:42:19.070237 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-16 00:42:19.070244 | orchestrator | Tuesday 16 September 2025 00:42:13 +0000 (0:00:00.136) 0:00:08.704 ***** 2025-09-16 00:42:19.070251 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8832b43a-4370-5f7f-b8ca-e1ef860202d6'}}) 2025-09-16 00:42:19.070258 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b409e677-b998-57d2-be40-43b65c9fb72d'}}) 2025-09-16 00:42:19.070265 | orchestrator | 2025-09-16 00:42:19.070272 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-16 00:42:19.070278 | orchestrator | Tuesday 16 September 2025 00:42:13 +0000 (0:00:00.175) 0:00:08.879 ***** 2025-09-16 00:42:19.070286 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6', 'data_vg': 'ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6'}) 2025-09-16 00:42:19.070323 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b409e677-b998-57d2-be40-43b65c9fb72d', 'data_vg': 'ceph-b409e677-b998-57d2-be40-43b65c9fb72d'}) 2025-09-16 00:42:19.070331 | orchestrator | 2025-09-16 00:42:19.070337 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-16 00:42:19.070352 | orchestrator | Tuesday 16 September 2025 00:42:15 +0000 (0:00:01.939) 0:00:10.818 ***** 2025-09-16 00:42:19.070359 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6', 'data_vg': 'ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6'})  2025-09-16 00:42:19.070367 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b409e677-b998-57d2-be40-43b65c9fb72d', 'data_vg': 'ceph-b409e677-b998-57d2-be40-43b65c9fb72d'})  2025-09-16 00:42:19.070374 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:19.070380 | orchestrator | 2025-09-16 00:42:19.070387 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-16 
00:42:19.070395 | orchestrator | Tuesday 16 September 2025 00:42:15 +0000 (0:00:00.148) 0:00:10.967 ***** 2025-09-16 00:42:19.070403 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6', 'data_vg': 'ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6'}) 2025-09-16 00:42:19.070411 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b409e677-b998-57d2-be40-43b65c9fb72d', 'data_vg': 'ceph-b409e677-b998-57d2-be40-43b65c9fb72d'}) 2025-09-16 00:42:19.070419 | orchestrator | 2025-09-16 00:42:19.070426 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-16 00:42:19.070434 | orchestrator | Tuesday 16 September 2025 00:42:16 +0000 (0:00:01.350) 0:00:12.317 ***** 2025-09-16 00:42:19.070442 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6', 'data_vg': 'ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6'})  2025-09-16 00:42:19.070450 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b409e677-b998-57d2-be40-43b65c9fb72d', 'data_vg': 'ceph-b409e677-b998-57d2-be40-43b65c9fb72d'})  2025-09-16 00:42:19.070458 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:19.070465 | orchestrator | 2025-09-16 00:42:19.070473 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-16 00:42:19.070481 | orchestrator | Tuesday 16 September 2025 00:42:17 +0000 (0:00:00.148) 0:00:12.465 ***** 2025-09-16 00:42:19.070489 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:19.070496 | orchestrator | 2025-09-16 00:42:19.070504 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-16 00:42:19.070525 | orchestrator | Tuesday 16 September 2025 00:42:17 +0000 (0:00:00.131) 0:00:12.597 ***** 2025-09-16 00:42:19.070534 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6', 'data_vg': 'ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6'})  2025-09-16 00:42:19.070542 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b409e677-b998-57d2-be40-43b65c9fb72d', 'data_vg': 'ceph-b409e677-b998-57d2-be40-43b65c9fb72d'})  2025-09-16 00:42:19.070550 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:19.070557 | orchestrator | 2025-09-16 00:42:19.070565 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-16 00:42:19.070573 | orchestrator | Tuesday 16 September 2025 00:42:17 +0000 (0:00:00.347) 0:00:12.945 ***** 2025-09-16 00:42:19.070580 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:19.070588 | orchestrator | 2025-09-16 00:42:19.070596 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-16 00:42:19.070604 | orchestrator | Tuesday 16 September 2025 00:42:17 +0000 (0:00:00.141) 0:00:13.087 ***** 2025-09-16 00:42:19.070612 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6', 'data_vg': 'ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6'})  2025-09-16 00:42:19.070625 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b409e677-b998-57d2-be40-43b65c9fb72d', 'data_vg': 'ceph-b409e677-b998-57d2-be40-43b65c9fb72d'})  2025-09-16 00:42:19.070633 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:19.070641 | orchestrator | 2025-09-16 00:42:19.070648 | orchestrator | 
TASK [Create DB+WAL VGs] ******************************************************* 2025-09-16 00:42:19.070656 | orchestrator | Tuesday 16 September 2025 00:42:17 +0000 (0:00:00.152) 0:00:13.240 ***** 2025-09-16 00:42:19.070664 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:19.070671 | orchestrator | 2025-09-16 00:42:19.070679 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-16 00:42:19.070687 | orchestrator | Tuesday 16 September 2025 00:42:18 +0000 (0:00:00.150) 0:00:13.390 ***** 2025-09-16 00:42:19.070694 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6', 'data_vg': 'ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6'})  2025-09-16 00:42:19.070702 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b409e677-b998-57d2-be40-43b65c9fb72d', 'data_vg': 'ceph-b409e677-b998-57d2-be40-43b65c9fb72d'})  2025-09-16 00:42:19.070710 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:19.070717 | orchestrator | 2025-09-16 00:42:19.070725 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-16 00:42:19.070733 | orchestrator | Tuesday 16 September 2025 00:42:18 +0000 (0:00:00.171) 0:00:13.562 ***** 2025-09-16 00:42:19.070741 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:42:19.070749 | orchestrator | 2025-09-16 00:42:19.070755 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-16 00:42:19.070782 | orchestrator | Tuesday 16 September 2025 00:42:18 +0000 (0:00:00.133) 0:00:13.696 ***** 2025-09-16 00:42:19.070806 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6', 'data_vg': 'ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6'})  2025-09-16 00:42:19.070814 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b409e677-b998-57d2-be40-43b65c9fb72d', 'data_vg': 'ceph-b409e677-b998-57d2-be40-43b65c9fb72d'})  2025-09-16 00:42:19.070820 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:19.070827 | orchestrator | 2025-09-16 00:42:19.070833 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-16 00:42:19.070840 | orchestrator | Tuesday 16 September 2025 00:42:18 +0000 (0:00:00.159) 0:00:13.855 ***** 2025-09-16 00:42:19.070846 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6', 'data_vg': 'ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6'})  2025-09-16 00:42:19.070853 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b409e677-b998-57d2-be40-43b65c9fb72d', 'data_vg': 'ceph-b409e677-b998-57d2-be40-43b65c9fb72d'})  2025-09-16 00:42:19.070860 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:19.070866 | orchestrator | 2025-09-16 00:42:19.070873 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-16 00:42:19.070879 | orchestrator | Tuesday 16 September 2025 00:42:18 +0000 (0:00:00.163) 0:00:14.018 ***** 2025-09-16 00:42:19.070886 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6', 'data_vg': 'ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6'})  2025-09-16 00:42:19.070892 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b409e677-b998-57d2-be40-43b65c9fb72d', 'data_vg': 'ceph-b409e677-b998-57d2-be40-43b65c9fb72d'})  
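The two "changed" tasks above ("Create block VGs" and "Create block LVs") create one volume group per OSD device and a single logical volume inside it, using the VG/LV names from lvm_volumes and the VG-to-PV mapping built in "Create dict of block VGs -> PVs from ceph_osd_devices". A minimal sketch with the community.general LVM modules, assuming a hypothetical _block_vgs_to_pvs dict mapping VG names to device names; the playbook's actual task definitions may differ:

# Sketch only -- not the testbed's actual tasks.
- name: Create block VGs
  community.general.lvg:
    vg: "{{ item.data_vg }}"                           # e.g. ceph-8832b43a-...
    pvs: "/dev/{{ _block_vgs_to_pvs[item.data_vg] }}"  # e.g. /dev/sdb
  loop: "{{ lvm_volumes }}"

- name: Create block LVs
  community.general.lvol:
    vg: "{{ item.data_vg }}"
    lv: "{{ item.data }}"                              # e.g. osd-block-8832b43a-...
    size: 100%FREE                                     # consume the whole VG
  loop: "{{ lvm_volumes }}"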
2025-09-16 00:42:19.070899 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:19.070905 | orchestrator | 2025-09-16 00:42:19.070912 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-16 00:42:19.070919 | orchestrator | Tuesday 16 September 2025 00:42:18 +0000 (0:00:00.151) 0:00:14.170 ***** 2025-09-16 00:42:19.070925 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:19.070937 | orchestrator | 2025-09-16 00:42:19.070943 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-16 00:42:19.070950 | orchestrator | Tuesday 16 September 2025 00:42:18 +0000 (0:00:00.127) 0:00:14.297 ***** 2025-09-16 00:42:19.070957 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:19.070963 | orchestrator | 2025-09-16 00:42:19.070974 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-16 00:42:24.941490 | orchestrator | Tuesday 16 September 2025 00:42:19 +0000 (0:00:00.138) 0:00:14.435 ***** 2025-09-16 00:42:24.941598 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:24.941616 | orchestrator | 2025-09-16 00:42:24.941629 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-16 00:42:24.941640 | orchestrator | Tuesday 16 September 2025 00:42:19 +0000 (0:00:00.132) 0:00:14.568 ***** 2025-09-16 00:42:24.941651 | orchestrator | ok: [testbed-node-3] => { 2025-09-16 00:42:24.941663 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-16 00:42:24.941674 | orchestrator | } 2025-09-16 00:42:24.941685 | orchestrator | 2025-09-16 00:42:24.941696 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-16 00:42:24.941707 | orchestrator | Tuesday 16 September 2025 00:42:19 +0000 (0:00:00.334) 0:00:14.902 ***** 2025-09-16 00:42:24.941718 | orchestrator | ok: [testbed-node-3] => { 2025-09-16 00:42:24.941729 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-16 00:42:24.941740 | orchestrator | } 2025-09-16 00:42:24.941750 | orchestrator | 2025-09-16 00:42:24.941791 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-16 00:42:24.941809 | orchestrator | Tuesday 16 September 2025 00:42:19 +0000 (0:00:00.143) 0:00:15.045 ***** 2025-09-16 00:42:24.941820 | orchestrator | ok: [testbed-node-3] => { 2025-09-16 00:42:24.941831 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-16 00:42:24.941842 | orchestrator | } 2025-09-16 00:42:24.941854 | orchestrator | 2025-09-16 00:42:24.941865 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-16 00:42:24.941876 | orchestrator | Tuesday 16 September 2025 00:42:19 +0000 (0:00:00.146) 0:00:15.192 ***** 2025-09-16 00:42:24.941887 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:42:24.941898 | orchestrator | 2025-09-16 00:42:24.941908 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-16 00:42:24.941919 | orchestrator | Tuesday 16 September 2025 00:42:20 +0000 (0:00:00.656) 0:00:15.848 ***** 2025-09-16 00:42:24.941930 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:42:24.941940 | orchestrator | 2025-09-16 00:42:24.941951 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-16 00:42:24.941961 | orchestrator | Tuesday 16 September 2025 00:42:21 +0000 
(0:00:00.527) 0:00:16.376 ***** 2025-09-16 00:42:24.941972 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:42:24.941983 | orchestrator | 2025-09-16 00:42:24.941993 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-16 00:42:24.942004 | orchestrator | Tuesday 16 September 2025 00:42:21 +0000 (0:00:00.532) 0:00:16.908 ***** 2025-09-16 00:42:24.942015 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:42:24.942099 | orchestrator | 2025-09-16 00:42:24.942146 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-16 00:42:24.942157 | orchestrator | Tuesday 16 September 2025 00:42:21 +0000 (0:00:00.159) 0:00:17.068 ***** 2025-09-16 00:42:24.942167 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:24.942178 | orchestrator | 2025-09-16 00:42:24.942189 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-16 00:42:24.942200 | orchestrator | Tuesday 16 September 2025 00:42:21 +0000 (0:00:00.112) 0:00:17.180 ***** 2025-09-16 00:42:24.942211 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:24.942221 | orchestrator | 2025-09-16 00:42:24.942232 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-16 00:42:24.942243 | orchestrator | Tuesday 16 September 2025 00:42:21 +0000 (0:00:00.111) 0:00:17.291 ***** 2025-09-16 00:42:24.942253 | orchestrator | ok: [testbed-node-3] => { 2025-09-16 00:42:24.942283 | orchestrator |  "vgs_report": { 2025-09-16 00:42:24.942303 | orchestrator |  "vg": [] 2025-09-16 00:42:24.942314 | orchestrator |  } 2025-09-16 00:42:24.942325 | orchestrator | } 2025-09-16 00:42:24.942336 | orchestrator | 2025-09-16 00:42:24.942347 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-16 00:42:24.942358 | orchestrator | Tuesday 16 September 2025 00:42:22 +0000 (0:00:00.145) 0:00:17.437 ***** 2025-09-16 00:42:24.942368 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:24.942379 | orchestrator | 2025-09-16 00:42:24.942390 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-16 00:42:24.942400 | orchestrator | Tuesday 16 September 2025 00:42:22 +0000 (0:00:00.130) 0:00:17.568 ***** 2025-09-16 00:42:24.942411 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:24.942421 | orchestrator | 2025-09-16 00:42:24.942432 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-16 00:42:24.942442 | orchestrator | Tuesday 16 September 2025 00:42:22 +0000 (0:00:00.092) 0:00:17.661 ***** 2025-09-16 00:42:24.942453 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:24.942463 | orchestrator | 2025-09-16 00:42:24.942474 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-16 00:42:24.942484 | orchestrator | Tuesday 16 September 2025 00:42:22 +0000 (0:00:00.217) 0:00:17.879 ***** 2025-09-16 00:42:24.942495 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:24.942505 | orchestrator | 2025-09-16 00:42:24.942516 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-16 00:42:24.942527 | orchestrator | Tuesday 16 September 2025 00:42:22 +0000 (0:00:00.136) 0:00:18.015 ***** 2025-09-16 00:42:24.942537 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:24.942548 | orchestrator | 
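The "Gather DB/WAL/DB+WAL VGs with total and available size in bytes" tasks above query LVM for the existing volume groups and register JSON output that is later combined from _db/wal/db_wal_vgs_cmd_output; on this node there are no dedicated DB or WAL devices, so the combined report contains an empty vg list and all of the size checks are skipped. A sketch of such a query as an Ansible command task, with the register name taken from the log and the exact vgs options assumed:

# Sketch only: report VG name, total size and free size in bytes as JSON.
- name: Gather DB VGs with total and available size in bytes
  ansible.builtin.command: >-
    vgs --reportformat json --units b --nosuffix
    -o vg_name,vg_size,vg_free
  register: _db_vgs_cmd_output
  changed_when: false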
2025-09-16 00:42:24.942559 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-16 00:42:24.942569 | orchestrator | Tuesday 16 September 2025 00:42:22 +0000 (0:00:00.144) 0:00:18.160 ***** 2025-09-16 00:42:24.942580 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:24.942590 | orchestrator | 2025-09-16 00:42:24.942601 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-16 00:42:24.942611 | orchestrator | Tuesday 16 September 2025 00:42:22 +0000 (0:00:00.130) 0:00:18.290 ***** 2025-09-16 00:42:24.942622 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:24.942632 | orchestrator | 2025-09-16 00:42:24.942643 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-16 00:42:24.942654 | orchestrator | Tuesday 16 September 2025 00:42:23 +0000 (0:00:00.112) 0:00:18.403 ***** 2025-09-16 00:42:24.942664 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:24.942675 | orchestrator | 2025-09-16 00:42:24.942686 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-16 00:42:24.942714 | orchestrator | Tuesday 16 September 2025 00:42:23 +0000 (0:00:00.135) 0:00:18.539 ***** 2025-09-16 00:42:24.942725 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:24.942736 | orchestrator | 2025-09-16 00:42:24.942746 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-16 00:42:24.942757 | orchestrator | Tuesday 16 September 2025 00:42:23 +0000 (0:00:00.115) 0:00:18.654 ***** 2025-09-16 00:42:24.942815 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:24.942834 | orchestrator | 2025-09-16 00:42:24.942851 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-16 00:42:24.942863 | orchestrator | Tuesday 16 September 2025 00:42:23 +0000 (0:00:00.112) 0:00:18.767 ***** 2025-09-16 00:42:24.942873 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:24.942884 | orchestrator | 2025-09-16 00:42:24.942895 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-16 00:42:24.942905 | orchestrator | Tuesday 16 September 2025 00:42:23 +0000 (0:00:00.117) 0:00:18.884 ***** 2025-09-16 00:42:24.942916 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:24.942927 | orchestrator | 2025-09-16 00:42:24.942946 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-16 00:42:24.942957 | orchestrator | Tuesday 16 September 2025 00:42:23 +0000 (0:00:00.122) 0:00:19.007 ***** 2025-09-16 00:42:24.942968 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:24.942979 | orchestrator | 2025-09-16 00:42:24.942989 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-16 00:42:24.943000 | orchestrator | Tuesday 16 September 2025 00:42:23 +0000 (0:00:00.123) 0:00:19.130 ***** 2025-09-16 00:42:24.943010 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:24.943021 | orchestrator | 2025-09-16 00:42:24.943031 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-16 00:42:24.943042 | orchestrator | Tuesday 16 September 2025 00:42:23 +0000 (0:00:00.142) 0:00:19.273 ***** 2025-09-16 00:42:24.943054 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6', 'data_vg': 'ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6'})  2025-09-16 00:42:24.943066 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b409e677-b998-57d2-be40-43b65c9fb72d', 'data_vg': 'ceph-b409e677-b998-57d2-be40-43b65c9fb72d'})  2025-09-16 00:42:24.943077 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:24.943088 | orchestrator | 2025-09-16 00:42:24.943098 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-16 00:42:24.943109 | orchestrator | Tuesday 16 September 2025 00:42:24 +0000 (0:00:00.236) 0:00:19.509 ***** 2025-09-16 00:42:24.943119 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6', 'data_vg': 'ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6'})  2025-09-16 00:42:24.943130 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b409e677-b998-57d2-be40-43b65c9fb72d', 'data_vg': 'ceph-b409e677-b998-57d2-be40-43b65c9fb72d'})  2025-09-16 00:42:24.943141 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:24.943151 | orchestrator | 2025-09-16 00:42:24.943162 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-16 00:42:24.943173 | orchestrator | Tuesday 16 September 2025 00:42:24 +0000 (0:00:00.138) 0:00:19.647 ***** 2025-09-16 00:42:24.943184 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6', 'data_vg': 'ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6'})  2025-09-16 00:42:24.943194 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b409e677-b998-57d2-be40-43b65c9fb72d', 'data_vg': 'ceph-b409e677-b998-57d2-be40-43b65c9fb72d'})  2025-09-16 00:42:24.943205 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:24.943215 | orchestrator | 2025-09-16 00:42:24.943226 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-16 00:42:24.943236 | orchestrator | Tuesday 16 September 2025 00:42:24 +0000 (0:00:00.143) 0:00:19.791 ***** 2025-09-16 00:42:24.943247 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6', 'data_vg': 'ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6'})  2025-09-16 00:42:24.943258 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b409e677-b998-57d2-be40-43b65c9fb72d', 'data_vg': 'ceph-b409e677-b998-57d2-be40-43b65c9fb72d'})  2025-09-16 00:42:24.943268 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:24.943279 | orchestrator | 2025-09-16 00:42:24.943289 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-16 00:42:24.943300 | orchestrator | Tuesday 16 September 2025 00:42:24 +0000 (0:00:00.203) 0:00:19.995 ***** 2025-09-16 00:42:24.943311 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6', 'data_vg': 'ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6'})  2025-09-16 00:42:24.943321 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b409e677-b998-57d2-be40-43b65c9fb72d', 'data_vg': 'ceph-b409e677-b998-57d2-be40-43b65c9fb72d'})  2025-09-16 00:42:24.943332 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:24.943349 | orchestrator | 2025-09-16 00:42:24.943359 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 
2025-09-16 00:42:24.943370 | orchestrator | Tuesday 16 September 2025 00:42:24 +0000 (0:00:00.167) 0:00:20.162 ***** 2025-09-16 00:42:24.943388 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6', 'data_vg': 'ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6'})  2025-09-16 00:42:24.943407 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b409e677-b998-57d2-be40-43b65c9fb72d', 'data_vg': 'ceph-b409e677-b998-57d2-be40-43b65c9fb72d'})  2025-09-16 00:42:30.088275 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:30.088386 | orchestrator | 2025-09-16 00:42:30.088403 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-16 00:42:30.088418 | orchestrator | Tuesday 16 September 2025 00:42:24 +0000 (0:00:00.146) 0:00:20.309 ***** 2025-09-16 00:42:30.088430 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6', 'data_vg': 'ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6'})  2025-09-16 00:42:30.088443 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b409e677-b998-57d2-be40-43b65c9fb72d', 'data_vg': 'ceph-b409e677-b998-57d2-be40-43b65c9fb72d'})  2025-09-16 00:42:30.088454 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:30.088465 | orchestrator | 2025-09-16 00:42:30.088476 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-16 00:42:30.088487 | orchestrator | Tuesday 16 September 2025 00:42:25 +0000 (0:00:00.155) 0:00:20.465 ***** 2025-09-16 00:42:30.088498 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6', 'data_vg': 'ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6'})  2025-09-16 00:42:30.088510 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b409e677-b998-57d2-be40-43b65c9fb72d', 'data_vg': 'ceph-b409e677-b998-57d2-be40-43b65c9fb72d'})  2025-09-16 00:42:30.088520 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:30.088532 | orchestrator | 2025-09-16 00:42:30.088543 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-16 00:42:30.088554 | orchestrator | Tuesday 16 September 2025 00:42:25 +0000 (0:00:00.159) 0:00:20.625 ***** 2025-09-16 00:42:30.088565 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:42:30.088576 | orchestrator | 2025-09-16 00:42:30.088587 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-16 00:42:30.088598 | orchestrator | Tuesday 16 September 2025 00:42:25 +0000 (0:00:00.486) 0:00:21.111 ***** 2025-09-16 00:42:30.088609 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:42:30.088620 | orchestrator | 2025-09-16 00:42:30.088630 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-16 00:42:30.088641 | orchestrator | Tuesday 16 September 2025 00:42:26 +0000 (0:00:00.512) 0:00:21.623 ***** 2025-09-16 00:42:30.088652 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:42:30.088663 | orchestrator | 2025-09-16 00:42:30.088673 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-16 00:42:30.088684 | orchestrator | Tuesday 16 September 2025 00:42:26 +0000 (0:00:00.129) 0:00:21.753 ***** 2025-09-16 00:42:30.088695 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 
'osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6', 'vg_name': 'ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6'}) 2025-09-16 00:42:30.088708 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-b409e677-b998-57d2-be40-43b65c9fb72d', 'vg_name': 'ceph-b409e677-b998-57d2-be40-43b65c9fb72d'}) 2025-09-16 00:42:30.088718 | orchestrator | 2025-09-16 00:42:30.088746 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-16 00:42:30.088758 | orchestrator | Tuesday 16 September 2025 00:42:26 +0000 (0:00:00.180) 0:00:21.933 ***** 2025-09-16 00:42:30.088801 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6', 'data_vg': 'ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6'})  2025-09-16 00:42:30.088835 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b409e677-b998-57d2-be40-43b65c9fb72d', 'data_vg': 'ceph-b409e677-b998-57d2-be40-43b65c9fb72d'})  2025-09-16 00:42:30.088848 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:30.088861 | orchestrator | 2025-09-16 00:42:30.088873 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-16 00:42:30.088886 | orchestrator | Tuesday 16 September 2025 00:42:26 +0000 (0:00:00.277) 0:00:22.211 ***** 2025-09-16 00:42:30.088899 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6', 'data_vg': 'ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6'})  2025-09-16 00:42:30.088912 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b409e677-b998-57d2-be40-43b65c9fb72d', 'data_vg': 'ceph-b409e677-b998-57d2-be40-43b65c9fb72d'})  2025-09-16 00:42:30.088924 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:30.088937 | orchestrator | 2025-09-16 00:42:30.088950 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-16 00:42:30.088963 | orchestrator | Tuesday 16 September 2025 00:42:26 +0000 (0:00:00.129) 0:00:22.340 ***** 2025-09-16 00:42:30.088977 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6', 'data_vg': 'ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6'})  2025-09-16 00:42:30.088990 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b409e677-b998-57d2-be40-43b65c9fb72d', 'data_vg': 'ceph-b409e677-b998-57d2-be40-43b65c9fb72d'})  2025-09-16 00:42:30.089003 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:42:30.089016 | orchestrator | 2025-09-16 00:42:30.089029 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-16 00:42:30.089042 | orchestrator | Tuesday 16 September 2025 00:42:27 +0000 (0:00:00.117) 0:00:22.457 ***** 2025-09-16 00:42:30.089055 | orchestrator | ok: [testbed-node-3] => { 2025-09-16 00:42:30.089068 | orchestrator |  "lvm_report": { 2025-09-16 00:42:30.089106 | orchestrator |  "lv": [ 2025-09-16 00:42:30.089119 | orchestrator |  { 2025-09-16 00:42:30.089151 | orchestrator |  "lv_name": "osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6", 2025-09-16 00:42:30.089165 | orchestrator |  "vg_name": "ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6" 2025-09-16 00:42:30.089179 | orchestrator |  }, 2025-09-16 00:42:30.089190 | orchestrator |  { 2025-09-16 00:42:30.089201 | orchestrator |  "lv_name": "osd-block-b409e677-b998-57d2-be40-43b65c9fb72d", 2025-09-16 00:42:30.089211 | orchestrator |  "vg_name": 
"ceph-b409e677-b998-57d2-be40-43b65c9fb72d" 2025-09-16 00:42:30.089222 | orchestrator |  } 2025-09-16 00:42:30.089233 | orchestrator |  ], 2025-09-16 00:42:30.089244 | orchestrator |  "pv": [ 2025-09-16 00:42:30.089254 | orchestrator |  { 2025-09-16 00:42:30.089265 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-16 00:42:30.089276 | orchestrator |  "vg_name": "ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6" 2025-09-16 00:42:30.089286 | orchestrator |  }, 2025-09-16 00:42:30.089297 | orchestrator |  { 2025-09-16 00:42:30.089308 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-16 00:42:30.089318 | orchestrator |  "vg_name": "ceph-b409e677-b998-57d2-be40-43b65c9fb72d" 2025-09-16 00:42:30.089329 | orchestrator |  } 2025-09-16 00:42:30.089340 | orchestrator |  ] 2025-09-16 00:42:30.089350 | orchestrator |  } 2025-09-16 00:42:30.089362 | orchestrator | } 2025-09-16 00:42:30.089373 | orchestrator | 2025-09-16 00:42:30.089384 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-16 00:42:30.089394 | orchestrator | 2025-09-16 00:42:30.089405 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-16 00:42:30.089416 | orchestrator | Tuesday 16 September 2025 00:42:27 +0000 (0:00:00.254) 0:00:22.712 ***** 2025-09-16 00:42:30.089427 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-16 00:42:30.089447 | orchestrator | 2025-09-16 00:42:30.089458 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-16 00:42:30.089469 | orchestrator | Tuesday 16 September 2025 00:42:27 +0000 (0:00:00.212) 0:00:22.925 ***** 2025-09-16 00:42:30.089480 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:42:30.089491 | orchestrator | 2025-09-16 00:42:30.089502 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:30.089512 | orchestrator | Tuesday 16 September 2025 00:42:27 +0000 (0:00:00.213) 0:00:23.138 ***** 2025-09-16 00:42:30.089523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-16 00:42:30.089534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-16 00:42:30.089544 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-16 00:42:30.089555 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-16 00:42:30.089566 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-16 00:42:30.089577 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-16 00:42:30.089587 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-16 00:42:30.089604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-16 00:42:30.089615 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-16 00:42:30.089626 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-16 00:42:30.089637 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-16 00:42:30.089647 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 
2025-09-16 00:42:30.089658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-16 00:42:30.089669 | orchestrator | 2025-09-16 00:42:30.089680 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:30.089690 | orchestrator | Tuesday 16 September 2025 00:42:28 +0000 (0:00:00.417) 0:00:23.555 ***** 2025-09-16 00:42:30.089701 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:30.089712 | orchestrator | 2025-09-16 00:42:30.089722 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:30.089733 | orchestrator | Tuesday 16 September 2025 00:42:28 +0000 (0:00:00.224) 0:00:23.780 ***** 2025-09-16 00:42:30.089744 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:30.089755 | orchestrator | 2025-09-16 00:42:30.089790 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:30.089802 | orchestrator | Tuesday 16 September 2025 00:42:28 +0000 (0:00:00.189) 0:00:23.970 ***** 2025-09-16 00:42:30.089813 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:30.089824 | orchestrator | 2025-09-16 00:42:30.089835 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:30.089846 | orchestrator | Tuesday 16 September 2025 00:42:29 +0000 (0:00:00.681) 0:00:24.651 ***** 2025-09-16 00:42:30.089857 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:30.089868 | orchestrator | 2025-09-16 00:42:30.089879 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:30.089890 | orchestrator | Tuesday 16 September 2025 00:42:29 +0000 (0:00:00.194) 0:00:24.846 ***** 2025-09-16 00:42:30.089901 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:30.089912 | orchestrator | 2025-09-16 00:42:30.089923 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:30.089934 | orchestrator | Tuesday 16 September 2025 00:42:29 +0000 (0:00:00.198) 0:00:25.044 ***** 2025-09-16 00:42:30.089945 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:30.089955 | orchestrator | 2025-09-16 00:42:30.089973 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:30.089984 | orchestrator | Tuesday 16 September 2025 00:42:29 +0000 (0:00:00.205) 0:00:25.250 ***** 2025-09-16 00:42:30.089995 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:30.090006 | orchestrator | 2025-09-16 00:42:30.090084 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:40.457977 | orchestrator | Tuesday 16 September 2025 00:42:30 +0000 (0:00:00.202) 0:00:25.452 ***** 2025-09-16 00:42:40.458126 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:40.458150 | orchestrator | 2025-09-16 00:42:40.458160 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:40.458169 | orchestrator | Tuesday 16 September 2025 00:42:30 +0000 (0:00:00.200) 0:00:25.652 ***** 2025-09-16 00:42:40.458178 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370) 2025-09-16 00:42:40.458187 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370) 2025-09-16 
00:42:40.458196 | orchestrator | 2025-09-16 00:42:40.458213 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:40.458222 | orchestrator | Tuesday 16 September 2025 00:42:30 +0000 (0:00:00.408) 0:00:26.061 ***** 2025-09-16 00:42:40.458230 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5c63af0b-1be6-4a9c-8f35-a4445080f1db) 2025-09-16 00:42:40.458238 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5c63af0b-1be6-4a9c-8f35-a4445080f1db) 2025-09-16 00:42:40.458246 | orchestrator | 2025-09-16 00:42:40.458254 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:40.458262 | orchestrator | Tuesday 16 September 2025 00:42:31 +0000 (0:00:00.412) 0:00:26.474 ***** 2025-09-16 00:42:40.458271 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_da9e83cb-2e5e-4388-ad73-1879a24665a3) 2025-09-16 00:42:40.458278 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_da9e83cb-2e5e-4388-ad73-1879a24665a3) 2025-09-16 00:42:40.458286 | orchestrator | 2025-09-16 00:42:40.458294 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:40.458302 | orchestrator | Tuesday 16 September 2025 00:42:31 +0000 (0:00:00.418) 0:00:26.893 ***** 2025-09-16 00:42:40.458310 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_46481bd5-1fc4-4619-9f81-82a2d5c944be) 2025-09-16 00:42:40.458318 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_46481bd5-1fc4-4619-9f81-82a2d5c944be) 2025-09-16 00:42:40.458326 | orchestrator | 2025-09-16 00:42:40.458333 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:40.458341 | orchestrator | Tuesday 16 September 2025 00:42:31 +0000 (0:00:00.405) 0:00:27.298 ***** 2025-09-16 00:42:40.458349 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-16 00:42:40.458357 | orchestrator | 2025-09-16 00:42:40.458365 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:40.458373 | orchestrator | Tuesday 16 September 2025 00:42:32 +0000 (0:00:00.352) 0:00:27.651 ***** 2025-09-16 00:42:40.458381 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-16 00:42:40.458389 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-16 00:42:40.458397 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-16 00:42:40.458405 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-16 00:42:40.458413 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-16 00:42:40.458421 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-16 00:42:40.458445 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-16 00:42:40.458470 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-16 00:42:40.458478 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-16 00:42:40.458486 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-16 00:42:40.458494 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-16 00:42:40.458503 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-16 00:42:40.458512 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-16 00:42:40.458520 | orchestrator | 2025-09-16 00:42:40.458529 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:40.458538 | orchestrator | Tuesday 16 September 2025 00:42:32 +0000 (0:00:00.622) 0:00:28.273 ***** 2025-09-16 00:42:40.458547 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:40.458556 | orchestrator | 2025-09-16 00:42:40.458565 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:40.458574 | orchestrator | Tuesday 16 September 2025 00:42:33 +0000 (0:00:00.195) 0:00:28.469 ***** 2025-09-16 00:42:40.458583 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:40.458592 | orchestrator | 2025-09-16 00:42:40.458601 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:40.458610 | orchestrator | Tuesday 16 September 2025 00:42:33 +0000 (0:00:00.207) 0:00:28.677 ***** 2025-09-16 00:42:40.458619 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:40.458627 | orchestrator | 2025-09-16 00:42:40.458636 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:40.458646 | orchestrator | Tuesday 16 September 2025 00:42:33 +0000 (0:00:00.211) 0:00:28.888 ***** 2025-09-16 00:42:40.458654 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:40.458663 | orchestrator | 2025-09-16 00:42:40.458687 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:40.458696 | orchestrator | Tuesday 16 September 2025 00:42:33 +0000 (0:00:00.200) 0:00:29.088 ***** 2025-09-16 00:42:40.458705 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:40.458714 | orchestrator | 2025-09-16 00:42:40.458723 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:40.458732 | orchestrator | Tuesday 16 September 2025 00:42:33 +0000 (0:00:00.234) 0:00:29.323 ***** 2025-09-16 00:42:40.458741 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:40.458750 | orchestrator | 2025-09-16 00:42:40.458775 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:40.458785 | orchestrator | Tuesday 16 September 2025 00:42:34 +0000 (0:00:00.229) 0:00:29.552 ***** 2025-09-16 00:42:40.458793 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:40.458802 | orchestrator | 2025-09-16 00:42:40.458811 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:40.458820 | orchestrator | Tuesday 16 September 2025 00:42:34 +0000 (0:00:00.168) 0:00:29.720 ***** 2025-09-16 00:42:40.458829 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:40.458837 | orchestrator | 2025-09-16 00:42:40.458847 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:40.458855 | orchestrator 
| Tuesday 16 September 2025 00:42:34 +0000 (0:00:00.238) 0:00:29.959 ***** 2025-09-16 00:42:40.458863 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-16 00:42:40.458871 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-16 00:42:40.458879 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-16 00:42:40.458887 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-16 00:42:40.458895 | orchestrator | 2025-09-16 00:42:40.458903 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:40.458911 | orchestrator | Tuesday 16 September 2025 00:42:35 +0000 (0:00:00.869) 0:00:30.828 ***** 2025-09-16 00:42:40.458925 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:40.458933 | orchestrator | 2025-09-16 00:42:40.458941 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:40.458949 | orchestrator | Tuesday 16 September 2025 00:42:35 +0000 (0:00:00.226) 0:00:31.054 ***** 2025-09-16 00:42:40.458957 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:40.458964 | orchestrator | 2025-09-16 00:42:40.458972 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:40.458980 | orchestrator | Tuesday 16 September 2025 00:42:35 +0000 (0:00:00.174) 0:00:31.229 ***** 2025-09-16 00:42:40.458988 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:40.458995 | orchestrator | 2025-09-16 00:42:40.459003 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:40.459011 | orchestrator | Tuesday 16 September 2025 00:42:36 +0000 (0:00:00.658) 0:00:31.887 ***** 2025-09-16 00:42:40.459019 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:40.459027 | orchestrator | 2025-09-16 00:42:40.459034 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-16 00:42:40.459042 | orchestrator | Tuesday 16 September 2025 00:42:36 +0000 (0:00:00.205) 0:00:32.093 ***** 2025-09-16 00:42:40.459054 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:40.459062 | orchestrator | 2025-09-16 00:42:40.459070 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-16 00:42:40.459078 | orchestrator | Tuesday 16 September 2025 00:42:36 +0000 (0:00:00.133) 0:00:32.226 ***** 2025-09-16 00:42:40.459086 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a154e298-15cb-5d50-9a1c-17bc1371db7e'}}) 2025-09-16 00:42:40.459094 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '56010334-63d7-5603-a2fe-432c47d6dcb8'}}) 2025-09-16 00:42:40.459102 | orchestrator | 2025-09-16 00:42:40.459110 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-16 00:42:40.459117 | orchestrator | Tuesday 16 September 2025 00:42:37 +0000 (0:00:00.193) 0:00:32.419 ***** 2025-09-16 00:42:40.459126 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e', 'data_vg': 'ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e'}) 2025-09-16 00:42:40.459135 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8', 'data_vg': 'ceph-56010334-63d7-5603-a2fe-432c47d6dcb8'}) 2025-09-16 00:42:40.459143 | orchestrator | 2025-09-16 00:42:40.459151 | orchestrator | TASK 
[Print 'Create block VGs'] ************************************************ 2025-09-16 00:42:40.459159 | orchestrator | Tuesday 16 September 2025 00:42:38 +0000 (0:00:01.881) 0:00:34.301 ***** 2025-09-16 00:42:40.459167 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e', 'data_vg': 'ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e'})  2025-09-16 00:42:40.459176 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8', 'data_vg': 'ceph-56010334-63d7-5603-a2fe-432c47d6dcb8'})  2025-09-16 00:42:40.459183 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:40.459191 | orchestrator | 2025-09-16 00:42:40.459199 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-16 00:42:40.459207 | orchestrator | Tuesday 16 September 2025 00:42:39 +0000 (0:00:00.201) 0:00:34.503 ***** 2025-09-16 00:42:40.459214 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e', 'data_vg': 'ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e'}) 2025-09-16 00:42:40.459222 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8', 'data_vg': 'ceph-56010334-63d7-5603-a2fe-432c47d6dcb8'}) 2025-09-16 00:42:40.459230 | orchestrator | 2025-09-16 00:42:40.459244 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-16 00:42:45.917942 | orchestrator | Tuesday 16 September 2025 00:42:40 +0000 (0:00:01.319) 0:00:35.822 ***** 2025-09-16 00:42:45.918126 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e', 'data_vg': 'ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e'})  2025-09-16 00:42:45.918149 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8', 'data_vg': 'ceph-56010334-63d7-5603-a2fe-432c47d6dcb8'})  2025-09-16 00:42:45.918161 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:45.918173 | orchestrator | 2025-09-16 00:42:45.918185 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-16 00:42:45.918196 | orchestrator | Tuesday 16 September 2025 00:42:40 +0000 (0:00:00.163) 0:00:35.985 ***** 2025-09-16 00:42:45.918207 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:45.918218 | orchestrator | 2025-09-16 00:42:45.918229 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-16 00:42:45.918240 | orchestrator | Tuesday 16 September 2025 00:42:40 +0000 (0:00:00.139) 0:00:36.125 ***** 2025-09-16 00:42:45.918251 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e', 'data_vg': 'ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e'})  2025-09-16 00:42:45.918263 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8', 'data_vg': 'ceph-56010334-63d7-5603-a2fe-432c47d6dcb8'})  2025-09-16 00:42:45.918273 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:45.918284 | orchestrator | 2025-09-16 00:42:45.918295 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-16 00:42:45.918306 | orchestrator | Tuesday 16 September 2025 00:42:40 +0000 (0:00:00.186) 0:00:36.312 ***** 2025-09-16 00:42:45.918317 | orchestrator | skipping: [testbed-node-4] 
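
The two "changed" tasks above ("Create block VGs" and "Create block LVs") are where the Ceph OSD volumes for testbed-node-4 are actually laid out: for each entry in ceph_osd_devices (here sdb and sdc), a volume group named ceph-<osd_lvm_uuid> is created on the raw device and a single logical volume osd-block-<osd_lvm_uuid> is placed inside it. The surrounding DB/WAL tasks are skipped because no ceph_db_devices, ceph_wal_devices or ceph_db_wal_devices are defined for this run. The actual task file is not shown in this log; the following is only a minimal sketch of roughly equivalent Ansible tasks, assuming the stock community.general.lvg and community.general.lvol modules, the lvm_volumes items visible in the loop output above (keys 'data' and 'data_vg'), and a hypothetical variable block_vgs_to_pvs standing in for the VG->PV dict built by the earlier "Create dict of block VGs -> PVs from ceph_osd_devices" task.

    # Sketch only -- not the OSISM task file. block_vgs_to_pvs is a
    # hypothetical name for the dict mapping "ceph-<uuid>" VG names to
    # their backing device (e.g. ceph-a154e298-... -> /dev/sdb).
    - name: Create block VGs (sketch)
      community.general.lvg:
        vg: "{{ item.data_vg }}"                      # e.g. ceph-a154e298-...
        pvs: "{{ block_vgs_to_pvs[item.data_vg] }}"   # e.g. /dev/sdb
        state: present
      loop: "{{ lvm_volumes }}"

    - name: Create block LVs (sketch)
      community.general.lvol:
        vg: "{{ item.data_vg }}"
        lv: "{{ item.data }}"                         # e.g. osd-block-a154e298-...
        size: 100%FREE                                # one LV spanning the whole VG
        shrink: false
        state: present
      loop: "{{ lvm_volumes }}"

With this layout each OSD appears to keep data, DB and WAL on the same LV, which is consistent with the empty _num_osds_wanted_per_*_vg dictionaries and the empty vgs_report printed further down for this node.
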
2025-09-16 00:42:45.918327 | orchestrator | 2025-09-16 00:42:45.918338 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-16 00:42:45.918349 | orchestrator | Tuesday 16 September 2025 00:42:41 +0000 (0:00:00.129) 0:00:36.441 ***** 2025-09-16 00:42:45.918360 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e', 'data_vg': 'ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e'})  2025-09-16 00:42:45.918371 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8', 'data_vg': 'ceph-56010334-63d7-5603-a2fe-432c47d6dcb8'})  2025-09-16 00:42:45.918382 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:45.918393 | orchestrator | 2025-09-16 00:42:45.918404 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-16 00:42:45.918414 | orchestrator | Tuesday 16 September 2025 00:42:41 +0000 (0:00:00.147) 0:00:36.589 ***** 2025-09-16 00:42:45.918439 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:45.918450 | orchestrator | 2025-09-16 00:42:45.918461 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-16 00:42:45.918473 | orchestrator | Tuesday 16 September 2025 00:42:41 +0000 (0:00:00.319) 0:00:36.908 ***** 2025-09-16 00:42:45.918487 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e', 'data_vg': 'ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e'})  2025-09-16 00:42:45.918499 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8', 'data_vg': 'ceph-56010334-63d7-5603-a2fe-432c47d6dcb8'})  2025-09-16 00:42:45.918512 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:45.918524 | orchestrator | 2025-09-16 00:42:45.918536 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-16 00:42:45.918549 | orchestrator | Tuesday 16 September 2025 00:42:41 +0000 (0:00:00.143) 0:00:37.052 ***** 2025-09-16 00:42:45.918561 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:42:45.918574 | orchestrator | 2025-09-16 00:42:45.918586 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-16 00:42:45.918600 | orchestrator | Tuesday 16 September 2025 00:42:41 +0000 (0:00:00.146) 0:00:37.198 ***** 2025-09-16 00:42:45.918622 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e', 'data_vg': 'ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e'})  2025-09-16 00:42:45.918635 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8', 'data_vg': 'ceph-56010334-63d7-5603-a2fe-432c47d6dcb8'})  2025-09-16 00:42:45.918648 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:45.918661 | orchestrator | 2025-09-16 00:42:45.918674 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-16 00:42:45.918686 | orchestrator | Tuesday 16 September 2025 00:42:41 +0000 (0:00:00.144) 0:00:37.342 ***** 2025-09-16 00:42:45.918698 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e', 'data_vg': 'ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e'})  2025-09-16 00:42:45.918711 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8', 'data_vg': 'ceph-56010334-63d7-5603-a2fe-432c47d6dcb8'})  2025-09-16 00:42:45.918724 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:45.918736 | orchestrator | 2025-09-16 00:42:45.918748 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-16 00:42:45.918784 | orchestrator | Tuesday 16 September 2025 00:42:42 +0000 (0:00:00.142) 0:00:37.485 ***** 2025-09-16 00:42:45.918815 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e', 'data_vg': 'ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e'})  2025-09-16 00:42:45.918829 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8', 'data_vg': 'ceph-56010334-63d7-5603-a2fe-432c47d6dcb8'})  2025-09-16 00:42:45.918842 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:45.918853 | orchestrator | 2025-09-16 00:42:45.918864 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-16 00:42:45.918874 | orchestrator | Tuesday 16 September 2025 00:42:42 +0000 (0:00:00.152) 0:00:37.637 ***** 2025-09-16 00:42:45.918885 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:45.918896 | orchestrator | 2025-09-16 00:42:45.918907 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-16 00:42:45.918918 | orchestrator | Tuesday 16 September 2025 00:42:42 +0000 (0:00:00.143) 0:00:37.781 ***** 2025-09-16 00:42:45.918929 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:45.918939 | orchestrator | 2025-09-16 00:42:45.918950 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-16 00:42:45.918961 | orchestrator | Tuesday 16 September 2025 00:42:42 +0000 (0:00:00.135) 0:00:37.916 ***** 2025-09-16 00:42:45.918971 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:45.918982 | orchestrator | 2025-09-16 00:42:45.918993 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-16 00:42:45.919004 | orchestrator | Tuesday 16 September 2025 00:42:42 +0000 (0:00:00.135) 0:00:38.052 ***** 2025-09-16 00:42:45.919015 | orchestrator | ok: [testbed-node-4] => { 2025-09-16 00:42:45.919026 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-16 00:42:45.919037 | orchestrator | } 2025-09-16 00:42:45.919048 | orchestrator | 2025-09-16 00:42:45.919059 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-16 00:42:45.919070 | orchestrator | Tuesday 16 September 2025 00:42:42 +0000 (0:00:00.136) 0:00:38.189 ***** 2025-09-16 00:42:45.919081 | orchestrator | ok: [testbed-node-4] => { 2025-09-16 00:42:45.919091 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-16 00:42:45.919102 | orchestrator | } 2025-09-16 00:42:45.919113 | orchestrator | 2025-09-16 00:42:45.919124 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-16 00:42:45.919134 | orchestrator | Tuesday 16 September 2025 00:42:42 +0000 (0:00:00.148) 0:00:38.337 ***** 2025-09-16 00:42:45.919145 | orchestrator | ok: [testbed-node-4] => { 2025-09-16 00:42:45.919156 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-16 00:42:45.919174 | orchestrator | } 2025-09-16 00:42:45.919184 | orchestrator | 2025-09-16 00:42:45.919195 | orchestrator | TASK [Gather DB VGs 
with total and available size in bytes] ******************** 2025-09-16 00:42:45.919206 | orchestrator | Tuesday 16 September 2025 00:42:43 +0000 (0:00:00.132) 0:00:38.469 ***** 2025-09-16 00:42:45.919217 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:42:45.919228 | orchestrator | 2025-09-16 00:42:45.919239 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-16 00:42:45.919249 | orchestrator | Tuesday 16 September 2025 00:42:43 +0000 (0:00:00.770) 0:00:39.239 ***** 2025-09-16 00:42:45.919260 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:42:45.919271 | orchestrator | 2025-09-16 00:42:45.919282 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-16 00:42:45.919293 | orchestrator | Tuesday 16 September 2025 00:42:44 +0000 (0:00:00.510) 0:00:39.750 ***** 2025-09-16 00:42:45.919304 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:42:45.919314 | orchestrator | 2025-09-16 00:42:45.919325 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-16 00:42:45.919336 | orchestrator | Tuesday 16 September 2025 00:42:44 +0000 (0:00:00.519) 0:00:40.269 ***** 2025-09-16 00:42:45.919346 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:42:45.919357 | orchestrator | 2025-09-16 00:42:45.919368 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-16 00:42:45.919379 | orchestrator | Tuesday 16 September 2025 00:42:45 +0000 (0:00:00.140) 0:00:40.410 ***** 2025-09-16 00:42:45.919389 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:45.919400 | orchestrator | 2025-09-16 00:42:45.919410 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-16 00:42:45.919421 | orchestrator | Tuesday 16 September 2025 00:42:45 +0000 (0:00:00.108) 0:00:40.518 ***** 2025-09-16 00:42:45.919439 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:45.919450 | orchestrator | 2025-09-16 00:42:45.919461 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-16 00:42:45.919472 | orchestrator | Tuesday 16 September 2025 00:42:45 +0000 (0:00:00.104) 0:00:40.623 ***** 2025-09-16 00:42:45.919483 | orchestrator | ok: [testbed-node-4] => { 2025-09-16 00:42:45.919494 | orchestrator |  "vgs_report": { 2025-09-16 00:42:45.919505 | orchestrator |  "vg": [] 2025-09-16 00:42:45.919516 | orchestrator |  } 2025-09-16 00:42:45.919528 | orchestrator | } 2025-09-16 00:42:45.919538 | orchestrator | 2025-09-16 00:42:45.919549 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-16 00:42:45.919560 | orchestrator | Tuesday 16 September 2025 00:42:45 +0000 (0:00:00.148) 0:00:40.771 ***** 2025-09-16 00:42:45.919571 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:45.919581 | orchestrator | 2025-09-16 00:42:45.919592 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-16 00:42:45.919603 | orchestrator | Tuesday 16 September 2025 00:42:45 +0000 (0:00:00.135) 0:00:40.907 ***** 2025-09-16 00:42:45.919613 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:45.919624 | orchestrator | 2025-09-16 00:42:45.919635 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-16 00:42:45.919646 | orchestrator | Tuesday 16 September 2025 00:42:45 +0000 
(0:00:00.122) 0:00:41.030 ***** 2025-09-16 00:42:45.919656 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:45.919667 | orchestrator | 2025-09-16 00:42:45.919678 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-16 00:42:45.919688 | orchestrator | Tuesday 16 September 2025 00:42:45 +0000 (0:00:00.132) 0:00:41.163 ***** 2025-09-16 00:42:45.919699 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:45.919710 | orchestrator | 2025-09-16 00:42:45.919721 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-16 00:42:45.919738 | orchestrator | Tuesday 16 September 2025 00:42:45 +0000 (0:00:00.121) 0:00:41.284 ***** 2025-09-16 00:42:50.554116 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:50.554212 | orchestrator | 2025-09-16 00:42:50.554248 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-16 00:42:50.554261 | orchestrator | Tuesday 16 September 2025 00:42:46 +0000 (0:00:00.125) 0:00:41.410 ***** 2025-09-16 00:42:50.554271 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:50.554280 | orchestrator | 2025-09-16 00:42:50.554290 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-16 00:42:50.554300 | orchestrator | Tuesday 16 September 2025 00:42:46 +0000 (0:00:00.323) 0:00:41.733 ***** 2025-09-16 00:42:50.554309 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:50.554319 | orchestrator | 2025-09-16 00:42:50.554328 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-16 00:42:50.554338 | orchestrator | Tuesday 16 September 2025 00:42:46 +0000 (0:00:00.144) 0:00:41.877 ***** 2025-09-16 00:42:50.554348 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:50.554357 | orchestrator | 2025-09-16 00:42:50.554366 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-16 00:42:50.554376 | orchestrator | Tuesday 16 September 2025 00:42:46 +0000 (0:00:00.126) 0:00:42.003 ***** 2025-09-16 00:42:50.554385 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:50.554394 | orchestrator | 2025-09-16 00:42:50.554404 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-16 00:42:50.554413 | orchestrator | Tuesday 16 September 2025 00:42:46 +0000 (0:00:00.137) 0:00:42.140 ***** 2025-09-16 00:42:50.554423 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:50.554432 | orchestrator | 2025-09-16 00:42:50.554442 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-16 00:42:50.554451 | orchestrator | Tuesday 16 September 2025 00:42:46 +0000 (0:00:00.118) 0:00:42.259 ***** 2025-09-16 00:42:50.554461 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:50.554470 | orchestrator | 2025-09-16 00:42:50.554480 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-16 00:42:50.554489 | orchestrator | Tuesday 16 September 2025 00:42:47 +0000 (0:00:00.151) 0:00:42.411 ***** 2025-09-16 00:42:50.554498 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:50.554508 | orchestrator | 2025-09-16 00:42:50.554517 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-16 00:42:50.554527 | orchestrator | Tuesday 16 September 2025 
00:42:47 +0000 (0:00:00.133) 0:00:42.544 ***** 2025-09-16 00:42:50.554536 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:50.554546 | orchestrator | 2025-09-16 00:42:50.554555 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-16 00:42:50.554564 | orchestrator | Tuesday 16 September 2025 00:42:47 +0000 (0:00:00.126) 0:00:42.671 ***** 2025-09-16 00:42:50.554574 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:50.554585 | orchestrator | 2025-09-16 00:42:50.554597 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-16 00:42:50.554609 | orchestrator | Tuesday 16 September 2025 00:42:47 +0000 (0:00:00.148) 0:00:42.819 ***** 2025-09-16 00:42:50.554633 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e', 'data_vg': 'ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e'})  2025-09-16 00:42:50.554652 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8', 'data_vg': 'ceph-56010334-63d7-5603-a2fe-432c47d6dcb8'})  2025-09-16 00:42:50.554670 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:50.554691 | orchestrator | 2025-09-16 00:42:50.554717 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-16 00:42:50.554734 | orchestrator | Tuesday 16 September 2025 00:42:47 +0000 (0:00:00.151) 0:00:42.971 ***** 2025-09-16 00:42:50.554752 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e', 'data_vg': 'ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e'})  2025-09-16 00:42:50.554798 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8', 'data_vg': 'ceph-56010334-63d7-5603-a2fe-432c47d6dcb8'})  2025-09-16 00:42:50.554829 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:50.554845 | orchestrator | 2025-09-16 00:42:50.554861 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-16 00:42:50.554877 | orchestrator | Tuesday 16 September 2025 00:42:47 +0000 (0:00:00.145) 0:00:43.116 ***** 2025-09-16 00:42:50.554894 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e', 'data_vg': 'ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e'})  2025-09-16 00:42:50.554912 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8', 'data_vg': 'ceph-56010334-63d7-5603-a2fe-432c47d6dcb8'})  2025-09-16 00:42:50.554930 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:50.554946 | orchestrator | 2025-09-16 00:42:50.554962 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-16 00:42:50.554979 | orchestrator | Tuesday 16 September 2025 00:42:47 +0000 (0:00:00.145) 0:00:43.261 ***** 2025-09-16 00:42:50.554997 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e', 'data_vg': 'ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e'})  2025-09-16 00:42:50.555014 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8', 'data_vg': 'ceph-56010334-63d7-5603-a2fe-432c47d6dcb8'})  2025-09-16 00:42:50.555030 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:50.555046 | orchestrator | 2025-09-16 00:42:50.555056 | 
orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-16 00:42:50.555085 | orchestrator | Tuesday 16 September 2025 00:42:48 +0000 (0:00:00.335) 0:00:43.597 ***** 2025-09-16 00:42:50.555096 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e', 'data_vg': 'ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e'})  2025-09-16 00:42:50.555106 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8', 'data_vg': 'ceph-56010334-63d7-5603-a2fe-432c47d6dcb8'})  2025-09-16 00:42:50.555115 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:50.555125 | orchestrator | 2025-09-16 00:42:50.555135 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-16 00:42:50.555144 | orchestrator | Tuesday 16 September 2025 00:42:48 +0000 (0:00:00.165) 0:00:43.762 ***** 2025-09-16 00:42:50.555153 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e', 'data_vg': 'ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e'})  2025-09-16 00:42:50.555163 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8', 'data_vg': 'ceph-56010334-63d7-5603-a2fe-432c47d6dcb8'})  2025-09-16 00:42:50.555173 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:50.555183 | orchestrator | 2025-09-16 00:42:50.555193 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-16 00:42:50.555202 | orchestrator | Tuesday 16 September 2025 00:42:48 +0000 (0:00:00.159) 0:00:43.921 ***** 2025-09-16 00:42:50.555211 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e', 'data_vg': 'ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e'})  2025-09-16 00:42:50.555221 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8', 'data_vg': 'ceph-56010334-63d7-5603-a2fe-432c47d6dcb8'})  2025-09-16 00:42:50.555231 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:50.555240 | orchestrator | 2025-09-16 00:42:50.555250 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-16 00:42:50.555259 | orchestrator | Tuesday 16 September 2025 00:42:48 +0000 (0:00:00.151) 0:00:44.072 ***** 2025-09-16 00:42:50.555269 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e', 'data_vg': 'ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e'})  2025-09-16 00:42:50.555287 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8', 'data_vg': 'ceph-56010334-63d7-5603-a2fe-432c47d6dcb8'})  2025-09-16 00:42:50.555297 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:50.555307 | orchestrator | 2025-09-16 00:42:50.555323 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-16 00:42:50.555333 | orchestrator | Tuesday 16 September 2025 00:42:48 +0000 (0:00:00.144) 0:00:44.217 ***** 2025-09-16 00:42:50.555342 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:42:50.555352 | orchestrator | 2025-09-16 00:42:50.555362 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-16 00:42:50.555371 | orchestrator | Tuesday 16 September 2025 00:42:49 +0000 (0:00:00.502) 
0:00:44.720 ***** 2025-09-16 00:42:50.555381 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:42:50.555390 | orchestrator | 2025-09-16 00:42:50.555400 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-16 00:42:50.555409 | orchestrator | Tuesday 16 September 2025 00:42:49 +0000 (0:00:00.498) 0:00:45.218 ***** 2025-09-16 00:42:50.555418 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:42:50.555428 | orchestrator | 2025-09-16 00:42:50.555437 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-16 00:42:50.555447 | orchestrator | Tuesday 16 September 2025 00:42:50 +0000 (0:00:00.158) 0:00:45.377 ***** 2025-09-16 00:42:50.555456 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8', 'vg_name': 'ceph-56010334-63d7-5603-a2fe-432c47d6dcb8'}) 2025-09-16 00:42:50.555467 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e', 'vg_name': 'ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e'}) 2025-09-16 00:42:50.555476 | orchestrator | 2025-09-16 00:42:50.555486 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-16 00:42:50.555495 | orchestrator | Tuesday 16 September 2025 00:42:50 +0000 (0:00:00.208) 0:00:45.585 ***** 2025-09-16 00:42:50.555505 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e', 'data_vg': 'ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e'})  2025-09-16 00:42:50.555514 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8', 'data_vg': 'ceph-56010334-63d7-5603-a2fe-432c47d6dcb8'})  2025-09-16 00:42:50.555524 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:50.555533 | orchestrator | 2025-09-16 00:42:50.555543 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-16 00:42:50.555552 | orchestrator | Tuesday 16 September 2025 00:42:50 +0000 (0:00:00.175) 0:00:45.760 ***** 2025-09-16 00:42:50.555562 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e', 'data_vg': 'ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e'})  2025-09-16 00:42:50.555571 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8', 'data_vg': 'ceph-56010334-63d7-5603-a2fe-432c47d6dcb8'})  2025-09-16 00:42:50.555586 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:56.449487 | orchestrator | 2025-09-16 00:42:56.449615 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-16 00:42:56.449632 | orchestrator | Tuesday 16 September 2025 00:42:50 +0000 (0:00:00.159) 0:00:45.920 ***** 2025-09-16 00:42:56.449645 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e', 'data_vg': 'ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e'})  2025-09-16 00:42:56.449702 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8', 'data_vg': 'ceph-56010334-63d7-5603-a2fe-432c47d6dcb8'})  2025-09-16 00:42:56.449716 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:42:56.449729 | orchestrator | 2025-09-16 00:42:56.449741 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-16 00:42:56.449752 
| orchestrator | Tuesday 16 September 2025 00:42:50 +0000 (0:00:00.161) 0:00:46.082 ***** 2025-09-16 00:42:56.449828 | orchestrator | ok: [testbed-node-4] => { 2025-09-16 00:42:56.449840 | orchestrator |  "lvm_report": { 2025-09-16 00:42:56.449853 | orchestrator |  "lv": [ 2025-09-16 00:42:56.449865 | orchestrator |  { 2025-09-16 00:42:56.449876 | orchestrator |  "lv_name": "osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8", 2025-09-16 00:42:56.449888 | orchestrator |  "vg_name": "ceph-56010334-63d7-5603-a2fe-432c47d6dcb8" 2025-09-16 00:42:56.449899 | orchestrator |  }, 2025-09-16 00:42:56.449910 | orchestrator |  { 2025-09-16 00:42:56.449920 | orchestrator |  "lv_name": "osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e", 2025-09-16 00:42:56.449931 | orchestrator |  "vg_name": "ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e" 2025-09-16 00:42:56.449942 | orchestrator |  } 2025-09-16 00:42:56.449952 | orchestrator |  ], 2025-09-16 00:42:56.449963 | orchestrator |  "pv": [ 2025-09-16 00:42:56.449974 | orchestrator |  { 2025-09-16 00:42:56.449985 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-16 00:42:56.449996 | orchestrator |  "vg_name": "ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e" 2025-09-16 00:42:56.450006 | orchestrator |  }, 2025-09-16 00:42:56.450063 | orchestrator |  { 2025-09-16 00:42:56.450078 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-16 00:42:56.450090 | orchestrator |  "vg_name": "ceph-56010334-63d7-5603-a2fe-432c47d6dcb8" 2025-09-16 00:42:56.450103 | orchestrator |  } 2025-09-16 00:42:56.450115 | orchestrator |  ] 2025-09-16 00:42:56.450127 | orchestrator |  } 2025-09-16 00:42:56.450140 | orchestrator | } 2025-09-16 00:42:56.450153 | orchestrator | 2025-09-16 00:42:56.450165 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-16 00:42:56.450177 | orchestrator | 2025-09-16 00:42:56.450189 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-16 00:42:56.450201 | orchestrator | Tuesday 16 September 2025 00:42:51 +0000 (0:00:00.464) 0:00:46.547 ***** 2025-09-16 00:42:56.450214 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-16 00:42:56.450226 | orchestrator | 2025-09-16 00:42:56.450239 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-16 00:42:56.450251 | orchestrator | Tuesday 16 September 2025 00:42:51 +0000 (0:00:00.237) 0:00:46.785 ***** 2025-09-16 00:42:56.450264 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:42:56.450277 | orchestrator | 2025-09-16 00:42:56.450290 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:56.450302 | orchestrator | Tuesday 16 September 2025 00:42:51 +0000 (0:00:00.234) 0:00:47.019 ***** 2025-09-16 00:42:56.450314 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-16 00:42:56.450327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-16 00:42:56.450340 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-16 00:42:56.450352 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-16 00:42:56.450364 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-16 00:42:56.450377 | orchestrator | included: /ansible/tasks/_add-device-links.yml 
for testbed-node-5 => (item=loop5) 2025-09-16 00:42:56.450387 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-16 00:42:56.450398 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-16 00:42:56.450408 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-16 00:42:56.450419 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-16 00:42:56.450430 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-16 00:42:56.450449 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-16 00:42:56.450460 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-16 00:42:56.450470 | orchestrator | 2025-09-16 00:42:56.450481 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:56.450492 | orchestrator | Tuesday 16 September 2025 00:42:52 +0000 (0:00:00.398) 0:00:47.418 ***** 2025-09-16 00:42:56.450502 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:42:56.450517 | orchestrator | 2025-09-16 00:42:56.450528 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:56.450539 | orchestrator | Tuesday 16 September 2025 00:42:52 +0000 (0:00:00.186) 0:00:47.604 ***** 2025-09-16 00:42:56.450550 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:42:56.450561 | orchestrator | 2025-09-16 00:42:56.450572 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:56.450602 | orchestrator | Tuesday 16 September 2025 00:42:52 +0000 (0:00:00.215) 0:00:47.820 ***** 2025-09-16 00:42:56.450613 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:42:56.450624 | orchestrator | 2025-09-16 00:42:56.450635 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:56.450646 | orchestrator | Tuesday 16 September 2025 00:42:52 +0000 (0:00:00.194) 0:00:48.015 ***** 2025-09-16 00:42:56.450657 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:42:56.450667 | orchestrator | 2025-09-16 00:42:56.450678 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:56.450689 | orchestrator | Tuesday 16 September 2025 00:42:52 +0000 (0:00:00.207) 0:00:48.223 ***** 2025-09-16 00:42:56.450700 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:42:56.450710 | orchestrator | 2025-09-16 00:42:56.450791 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:56.450805 | orchestrator | Tuesday 16 September 2025 00:42:53 +0000 (0:00:00.189) 0:00:48.412 ***** 2025-09-16 00:42:56.450816 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:42:56.450827 | orchestrator | 2025-09-16 00:42:56.450838 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:56.450848 | orchestrator | Tuesday 16 September 2025 00:42:53 +0000 (0:00:00.541) 0:00:48.954 ***** 2025-09-16 00:42:56.450859 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:42:56.450870 | orchestrator | 2025-09-16 00:42:56.450880 | orchestrator | TASK [Add known links to the list of available block 
devices] ****************** 2025-09-16 00:42:56.450891 | orchestrator | Tuesday 16 September 2025 00:42:53 +0000 (0:00:00.194) 0:00:49.148 ***** 2025-09-16 00:42:56.450902 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:42:56.450913 | orchestrator | 2025-09-16 00:42:56.450923 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:56.450934 | orchestrator | Tuesday 16 September 2025 00:42:53 +0000 (0:00:00.202) 0:00:49.351 ***** 2025-09-16 00:42:56.450945 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595) 2025-09-16 00:42:56.450957 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595) 2025-09-16 00:42:56.450967 | orchestrator | 2025-09-16 00:42:56.450978 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:56.450989 | orchestrator | Tuesday 16 September 2025 00:42:54 +0000 (0:00:00.406) 0:00:49.757 ***** 2025-09-16 00:42:56.451000 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a99d92e2-a7d0-4115-a3b5-db7bfa0170a9) 2025-09-16 00:42:56.451011 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a99d92e2-a7d0-4115-a3b5-db7bfa0170a9) 2025-09-16 00:42:56.451021 | orchestrator | 2025-09-16 00:42:56.451032 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:56.451043 | orchestrator | Tuesday 16 September 2025 00:42:54 +0000 (0:00:00.417) 0:00:50.174 ***** 2025-09-16 00:42:56.451066 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f8c86b93-6440-4cc6-ba3c-00ae05f2a443) 2025-09-16 00:42:56.451077 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f8c86b93-6440-4cc6-ba3c-00ae05f2a443) 2025-09-16 00:42:56.451088 | orchestrator | 2025-09-16 00:42:56.451099 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:56.451110 | orchestrator | Tuesday 16 September 2025 00:42:55 +0000 (0:00:00.510) 0:00:50.685 ***** 2025-09-16 00:42:56.451120 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ad9de541-7002-4a51-9253-a212a9f46ca2) 2025-09-16 00:42:56.451131 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ad9de541-7002-4a51-9253-a212a9f46ca2) 2025-09-16 00:42:56.451142 | orchestrator | 2025-09-16 00:42:56.451152 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-16 00:42:56.451163 | orchestrator | Tuesday 16 September 2025 00:42:55 +0000 (0:00:00.411) 0:00:51.096 ***** 2025-09-16 00:42:56.451174 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-16 00:42:56.451185 | orchestrator | 2025-09-16 00:42:56.451195 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:42:56.451206 | orchestrator | Tuesday 16 September 2025 00:42:56 +0000 (0:00:00.314) 0:00:51.411 ***** 2025-09-16 00:42:56.451217 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-16 00:42:56.451227 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-16 00:42:56.451238 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-16 00:42:56.451248 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-16 00:42:56.451259 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-16 00:42:56.451270 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-16 00:42:56.451280 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-16 00:42:56.451291 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-16 00:42:56.451301 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-16 00:42:56.451312 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-16 00:42:56.451323 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-16 00:42:56.451341 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-16 00:43:05.268169 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-16 00:43:05.268284 | orchestrator | 2025-09-16 00:43:05.268303 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:43:05.268315 | orchestrator | Tuesday 16 September 2025 00:42:56 +0000 (0:00:00.397) 0:00:51.808 ***** 2025-09-16 00:43:05.268327 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:05.268339 | orchestrator | 2025-09-16 00:43:05.268350 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:43:05.268362 | orchestrator | Tuesday 16 September 2025 00:42:56 +0000 (0:00:00.195) 0:00:52.004 ***** 2025-09-16 00:43:05.268373 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:05.268383 | orchestrator | 2025-09-16 00:43:05.268395 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:43:05.268406 | orchestrator | Tuesday 16 September 2025 00:42:56 +0000 (0:00:00.209) 0:00:52.213 ***** 2025-09-16 00:43:05.268416 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:05.268427 | orchestrator | 2025-09-16 00:43:05.268438 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:43:05.268470 | orchestrator | Tuesday 16 September 2025 00:42:57 +0000 (0:00:00.571) 0:00:52.784 ***** 2025-09-16 00:43:05.268482 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:05.268492 | orchestrator | 2025-09-16 00:43:05.268503 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:43:05.268514 | orchestrator | Tuesday 16 September 2025 00:42:57 +0000 (0:00:00.202) 0:00:52.987 ***** 2025-09-16 00:43:05.268525 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:05.268536 | orchestrator | 2025-09-16 00:43:05.268546 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:43:05.268557 | orchestrator | Tuesday 16 September 2025 00:42:57 +0000 (0:00:00.188) 0:00:53.175 ***** 2025-09-16 00:43:05.268568 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:05.268578 | orchestrator | 2025-09-16 00:43:05.268589 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2025-09-16 00:43:05.268600 | orchestrator | Tuesday 16 September 2025 00:42:57 +0000 (0:00:00.194) 0:00:53.369 ***** 2025-09-16 00:43:05.268611 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:05.268621 | orchestrator | 2025-09-16 00:43:05.268632 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:43:05.268643 | orchestrator | Tuesday 16 September 2025 00:42:58 +0000 (0:00:00.208) 0:00:53.578 ***** 2025-09-16 00:43:05.268654 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:05.268664 | orchestrator | 2025-09-16 00:43:05.268675 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:43:05.268686 | orchestrator | Tuesday 16 September 2025 00:42:58 +0000 (0:00:00.186) 0:00:53.765 ***** 2025-09-16 00:43:05.268698 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-16 00:43:05.268712 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-16 00:43:05.268740 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-16 00:43:05.268753 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-16 00:43:05.268792 | orchestrator | 2025-09-16 00:43:05.268805 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:43:05.268818 | orchestrator | Tuesday 16 September 2025 00:42:59 +0000 (0:00:00.702) 0:00:54.467 ***** 2025-09-16 00:43:05.268830 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:05.268843 | orchestrator | 2025-09-16 00:43:05.268855 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:43:05.268868 | orchestrator | Tuesday 16 September 2025 00:42:59 +0000 (0:00:00.196) 0:00:54.663 ***** 2025-09-16 00:43:05.268880 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:05.268894 | orchestrator | 2025-09-16 00:43:05.268907 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:43:05.268919 | orchestrator | Tuesday 16 September 2025 00:42:59 +0000 (0:00:00.214) 0:00:54.878 ***** 2025-09-16 00:43:05.268932 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:05.268944 | orchestrator | 2025-09-16 00:43:05.268957 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-16 00:43:05.268970 | orchestrator | Tuesday 16 September 2025 00:42:59 +0000 (0:00:00.177) 0:00:55.055 ***** 2025-09-16 00:43:05.268982 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:05.268995 | orchestrator | 2025-09-16 00:43:05.269008 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-16 00:43:05.269020 | orchestrator | Tuesday 16 September 2025 00:42:59 +0000 (0:00:00.180) 0:00:55.236 ***** 2025-09-16 00:43:05.269033 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:05.269045 | orchestrator | 2025-09-16 00:43:05.269057 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-16 00:43:05.269067 | orchestrator | Tuesday 16 September 2025 00:43:00 +0000 (0:00:00.325) 0:00:55.561 ***** 2025-09-16 00:43:05.269078 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '457b984f-2001-5589-9984-9a697803acd2'}}) 2025-09-16 00:43:05.269090 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 
'd2877fc6-62dc-51ad-b157-4c09a4f274b5'}}) 2025-09-16 00:43:05.269111 | orchestrator | 2025-09-16 00:43:05.269122 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-16 00:43:05.269133 | orchestrator | Tuesday 16 September 2025 00:43:00 +0000 (0:00:00.185) 0:00:55.747 ***** 2025-09-16 00:43:05.269145 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-457b984f-2001-5589-9984-9a697803acd2', 'data_vg': 'ceph-457b984f-2001-5589-9984-9a697803acd2'}) 2025-09-16 00:43:05.269158 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5', 'data_vg': 'ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5'}) 2025-09-16 00:43:05.269168 | orchestrator | 2025-09-16 00:43:05.269180 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-16 00:43:05.269206 | orchestrator | Tuesday 16 September 2025 00:43:02 +0000 (0:00:01.841) 0:00:57.589 ***** 2025-09-16 00:43:05.269217 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-457b984f-2001-5589-9984-9a697803acd2', 'data_vg': 'ceph-457b984f-2001-5589-9984-9a697803acd2'})  2025-09-16 00:43:05.269230 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5', 'data_vg': 'ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5'})  2025-09-16 00:43:05.269241 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:05.269251 | orchestrator | 2025-09-16 00:43:05.269262 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-16 00:43:05.269273 | orchestrator | Tuesday 16 September 2025 00:43:02 +0000 (0:00:00.150) 0:00:57.739 ***** 2025-09-16 00:43:05.269283 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-457b984f-2001-5589-9984-9a697803acd2', 'data_vg': 'ceph-457b984f-2001-5589-9984-9a697803acd2'}) 2025-09-16 00:43:05.269294 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5', 'data_vg': 'ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5'}) 2025-09-16 00:43:05.269306 | orchestrator | 2025-09-16 00:43:05.269317 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-16 00:43:05.269327 | orchestrator | Tuesday 16 September 2025 00:43:03 +0000 (0:00:01.319) 0:00:59.058 ***** 2025-09-16 00:43:05.269338 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-457b984f-2001-5589-9984-9a697803acd2', 'data_vg': 'ceph-457b984f-2001-5589-9984-9a697803acd2'})  2025-09-16 00:43:05.269349 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5', 'data_vg': 'ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5'})  2025-09-16 00:43:05.269360 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:05.269371 | orchestrator | 2025-09-16 00:43:05.269381 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-16 00:43:05.269392 | orchestrator | Tuesday 16 September 2025 00:43:03 +0000 (0:00:00.147) 0:00:59.205 ***** 2025-09-16 00:43:05.269403 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:05.269413 | orchestrator | 2025-09-16 00:43:05.269424 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-16 00:43:05.269434 | orchestrator | Tuesday 16 September 2025 00:43:03 +0000 (0:00:00.135) 0:00:59.341 ***** 2025-09-16 
00:43:05.269445 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-457b984f-2001-5589-9984-9a697803acd2', 'data_vg': 'ceph-457b984f-2001-5589-9984-9a697803acd2'})  2025-09-16 00:43:05.269461 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5', 'data_vg': 'ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5'})  2025-09-16 00:43:05.269473 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:05.269483 | orchestrator | 2025-09-16 00:43:05.269494 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-16 00:43:05.269505 | orchestrator | Tuesday 16 September 2025 00:43:04 +0000 (0:00:00.144) 0:00:59.485 ***** 2025-09-16 00:43:05.269516 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:05.269535 | orchestrator | 2025-09-16 00:43:05.269546 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-16 00:43:05.269557 | orchestrator | Tuesday 16 September 2025 00:43:04 +0000 (0:00:00.133) 0:00:59.619 ***** 2025-09-16 00:43:05.269567 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-457b984f-2001-5589-9984-9a697803acd2', 'data_vg': 'ceph-457b984f-2001-5589-9984-9a697803acd2'})  2025-09-16 00:43:05.269578 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5', 'data_vg': 'ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5'})  2025-09-16 00:43:05.269589 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:05.269600 | orchestrator | 2025-09-16 00:43:05.269611 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-16 00:43:05.269621 | orchestrator | Tuesday 16 September 2025 00:43:04 +0000 (0:00:00.150) 0:00:59.769 ***** 2025-09-16 00:43:05.269632 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:05.269642 | orchestrator | 2025-09-16 00:43:05.269653 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-16 00:43:05.269664 | orchestrator | Tuesday 16 September 2025 00:43:04 +0000 (0:00:00.165) 0:00:59.935 ***** 2025-09-16 00:43:05.269674 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-457b984f-2001-5589-9984-9a697803acd2', 'data_vg': 'ceph-457b984f-2001-5589-9984-9a697803acd2'})  2025-09-16 00:43:05.269685 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5', 'data_vg': 'ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5'})  2025-09-16 00:43:05.269696 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:05.269707 | orchestrator | 2025-09-16 00:43:05.269718 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-16 00:43:05.269728 | orchestrator | Tuesday 16 September 2025 00:43:04 +0000 (0:00:00.134) 0:01:00.069 ***** 2025-09-16 00:43:05.269739 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:43:05.269750 | orchestrator | 2025-09-16 00:43:05.269777 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-16 00:43:05.269789 | orchestrator | Tuesday 16 September 2025 00:43:05 +0000 (0:00:00.404) 0:01:00.474 ***** 2025-09-16 00:43:05.269807 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-457b984f-2001-5589-9984-9a697803acd2', 'data_vg': 'ceph-457b984f-2001-5589-9984-9a697803acd2'})  2025-09-16 00:43:11.299909 | orchestrator | 
skipping: [testbed-node-5] => (item={'data': 'osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5', 'data_vg': 'ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5'})  2025-09-16 00:43:11.300021 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:11.300038 | orchestrator | 2025-09-16 00:43:11.300051 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-16 00:43:11.300064 | orchestrator | Tuesday 16 September 2025 00:43:05 +0000 (0:00:00.161) 0:01:00.636 ***** 2025-09-16 00:43:11.300076 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-457b984f-2001-5589-9984-9a697803acd2', 'data_vg': 'ceph-457b984f-2001-5589-9984-9a697803acd2'})  2025-09-16 00:43:11.300087 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5', 'data_vg': 'ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5'})  2025-09-16 00:43:11.300098 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:11.300109 | orchestrator | 2025-09-16 00:43:11.300121 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-16 00:43:11.300132 | orchestrator | Tuesday 16 September 2025 00:43:05 +0000 (0:00:00.168) 0:01:00.804 ***** 2025-09-16 00:43:11.300143 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-457b984f-2001-5589-9984-9a697803acd2', 'data_vg': 'ceph-457b984f-2001-5589-9984-9a697803acd2'})  2025-09-16 00:43:11.300154 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5', 'data_vg': 'ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5'})  2025-09-16 00:43:11.300165 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:11.300198 | orchestrator | 2025-09-16 00:43:11.300209 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-16 00:43:11.300220 | orchestrator | Tuesday 16 September 2025 00:43:05 +0000 (0:00:00.146) 0:01:00.951 ***** 2025-09-16 00:43:11.300231 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:11.300241 | orchestrator | 2025-09-16 00:43:11.300252 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-16 00:43:11.300263 | orchestrator | Tuesday 16 September 2025 00:43:05 +0000 (0:00:00.147) 0:01:01.099 ***** 2025-09-16 00:43:11.300273 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:11.300284 | orchestrator | 2025-09-16 00:43:11.300294 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-16 00:43:11.300305 | orchestrator | Tuesday 16 September 2025 00:43:05 +0000 (0:00:00.137) 0:01:01.237 ***** 2025-09-16 00:43:11.300315 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:11.300326 | orchestrator | 2025-09-16 00:43:11.300337 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-16 00:43:11.300347 | orchestrator | Tuesday 16 September 2025 00:43:05 +0000 (0:00:00.122) 0:01:01.359 ***** 2025-09-16 00:43:11.300358 | orchestrator | ok: [testbed-node-5] => { 2025-09-16 00:43:11.300369 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-16 00:43:11.300380 | orchestrator | } 2025-09-16 00:43:11.300391 | orchestrator | 2025-09-16 00:43:11.300401 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-16 00:43:11.300412 | orchestrator | Tuesday 16 September 2025 00:43:06 +0000 (0:00:00.135) 
0:01:01.495 ***** 2025-09-16 00:43:11.300423 | orchestrator | ok: [testbed-node-5] => { 2025-09-16 00:43:11.300433 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-16 00:43:11.300444 | orchestrator | } 2025-09-16 00:43:11.300455 | orchestrator | 2025-09-16 00:43:11.300465 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-16 00:43:11.300477 | orchestrator | Tuesday 16 September 2025 00:43:06 +0000 (0:00:00.146) 0:01:01.642 ***** 2025-09-16 00:43:11.300487 | orchestrator | ok: [testbed-node-5] => { 2025-09-16 00:43:11.300498 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-16 00:43:11.300509 | orchestrator | } 2025-09-16 00:43:11.300519 | orchestrator | 2025-09-16 00:43:11.300530 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-16 00:43:11.300541 | orchestrator | Tuesday 16 September 2025 00:43:06 +0000 (0:00:00.158) 0:01:01.800 ***** 2025-09-16 00:43:11.300551 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:43:11.300562 | orchestrator | 2025-09-16 00:43:11.300573 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-16 00:43:11.300583 | orchestrator | Tuesday 16 September 2025 00:43:06 +0000 (0:00:00.547) 0:01:02.348 ***** 2025-09-16 00:43:11.300594 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:43:11.300604 | orchestrator | 2025-09-16 00:43:11.300615 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-16 00:43:11.300626 | orchestrator | Tuesday 16 September 2025 00:43:07 +0000 (0:00:00.507) 0:01:02.855 ***** 2025-09-16 00:43:11.300636 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:43:11.300647 | orchestrator | 2025-09-16 00:43:11.300657 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-16 00:43:11.300668 | orchestrator | Tuesday 16 September 2025 00:43:08 +0000 (0:00:00.734) 0:01:03.590 ***** 2025-09-16 00:43:11.300678 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:43:11.300689 | orchestrator | 2025-09-16 00:43:11.300699 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-16 00:43:11.300710 | orchestrator | Tuesday 16 September 2025 00:43:08 +0000 (0:00:00.168) 0:01:03.759 ***** 2025-09-16 00:43:11.300720 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:11.300731 | orchestrator | 2025-09-16 00:43:11.300741 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-16 00:43:11.300752 | orchestrator | Tuesday 16 September 2025 00:43:08 +0000 (0:00:00.126) 0:01:03.886 ***** 2025-09-16 00:43:11.300796 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:11.300807 | orchestrator | 2025-09-16 00:43:11.300818 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-16 00:43:11.300828 | orchestrator | Tuesday 16 September 2025 00:43:08 +0000 (0:00:00.124) 0:01:04.010 ***** 2025-09-16 00:43:11.300839 | orchestrator | ok: [testbed-node-5] => { 2025-09-16 00:43:11.300869 | orchestrator |  "vgs_report": { 2025-09-16 00:43:11.300881 | orchestrator |  "vg": [] 2025-09-16 00:43:11.300910 | orchestrator |  } 2025-09-16 00:43:11.300922 | orchestrator | } 2025-09-16 00:43:11.300934 | orchestrator | 2025-09-16 00:43:11.300944 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 
2025-09-16 00:43:11.300955 | orchestrator | Tuesday 16 September 2025 00:43:08 +0000 (0:00:00.160) 0:01:04.170 ***** 2025-09-16 00:43:11.300966 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:11.300977 | orchestrator | 2025-09-16 00:43:11.300988 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-16 00:43:11.300998 | orchestrator | Tuesday 16 September 2025 00:43:08 +0000 (0:00:00.164) 0:01:04.334 ***** 2025-09-16 00:43:11.301009 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:11.301019 | orchestrator | 2025-09-16 00:43:11.301030 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-16 00:43:11.301041 | orchestrator | Tuesday 16 September 2025 00:43:09 +0000 (0:00:00.121) 0:01:04.455 ***** 2025-09-16 00:43:11.301052 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:11.301062 | orchestrator | 2025-09-16 00:43:11.301073 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-16 00:43:11.301083 | orchestrator | Tuesday 16 September 2025 00:43:09 +0000 (0:00:00.124) 0:01:04.580 ***** 2025-09-16 00:43:11.301094 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:11.301105 | orchestrator | 2025-09-16 00:43:11.301115 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-16 00:43:11.301126 | orchestrator | Tuesday 16 September 2025 00:43:09 +0000 (0:00:00.133) 0:01:04.713 ***** 2025-09-16 00:43:11.301136 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:11.301147 | orchestrator | 2025-09-16 00:43:11.301158 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-16 00:43:11.301168 | orchestrator | Tuesday 16 September 2025 00:43:09 +0000 (0:00:00.135) 0:01:04.849 ***** 2025-09-16 00:43:11.301179 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:11.301190 | orchestrator | 2025-09-16 00:43:11.301200 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-16 00:43:11.301211 | orchestrator | Tuesday 16 September 2025 00:43:09 +0000 (0:00:00.118) 0:01:04.968 ***** 2025-09-16 00:43:11.301221 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:11.301232 | orchestrator | 2025-09-16 00:43:11.301242 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-16 00:43:11.301253 | orchestrator | Tuesday 16 September 2025 00:43:09 +0000 (0:00:00.130) 0:01:05.099 ***** 2025-09-16 00:43:11.301263 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:11.301274 | orchestrator | 2025-09-16 00:43:11.301285 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-16 00:43:11.301295 | orchestrator | Tuesday 16 September 2025 00:43:09 +0000 (0:00:00.127) 0:01:05.226 ***** 2025-09-16 00:43:11.301306 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:11.301316 | orchestrator | 2025-09-16 00:43:11.301327 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-16 00:43:11.301343 | orchestrator | Tuesday 16 September 2025 00:43:10 +0000 (0:00:00.335) 0:01:05.562 ***** 2025-09-16 00:43:11.301354 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:11.301365 | orchestrator | 2025-09-16 00:43:11.301375 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] 
********************* 2025-09-16 00:43:11.301386 | orchestrator | Tuesday 16 September 2025 00:43:10 +0000 (0:00:00.140) 0:01:05.703 ***** 2025-09-16 00:43:11.301397 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:11.301414 | orchestrator | 2025-09-16 00:43:11.301425 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-16 00:43:11.301435 | orchestrator | Tuesday 16 September 2025 00:43:10 +0000 (0:00:00.124) 0:01:05.828 ***** 2025-09-16 00:43:11.301446 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:11.301457 | orchestrator | 2025-09-16 00:43:11.301467 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-16 00:43:11.301478 | orchestrator | Tuesday 16 September 2025 00:43:10 +0000 (0:00:00.134) 0:01:05.962 ***** 2025-09-16 00:43:11.301489 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:11.301499 | orchestrator | 2025-09-16 00:43:11.301510 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-16 00:43:11.301521 | orchestrator | Tuesday 16 September 2025 00:43:10 +0000 (0:00:00.128) 0:01:06.090 ***** 2025-09-16 00:43:11.301531 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:11.301542 | orchestrator | 2025-09-16 00:43:11.301553 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-16 00:43:11.301564 | orchestrator | Tuesday 16 September 2025 00:43:10 +0000 (0:00:00.128) 0:01:06.218 ***** 2025-09-16 00:43:11.301574 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-457b984f-2001-5589-9984-9a697803acd2', 'data_vg': 'ceph-457b984f-2001-5589-9984-9a697803acd2'})  2025-09-16 00:43:11.301586 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5', 'data_vg': 'ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5'})  2025-09-16 00:43:11.301596 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:11.301607 | orchestrator | 2025-09-16 00:43:11.301618 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-16 00:43:11.301629 | orchestrator | Tuesday 16 September 2025 00:43:10 +0000 (0:00:00.149) 0:01:06.367 ***** 2025-09-16 00:43:11.301640 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-457b984f-2001-5589-9984-9a697803acd2', 'data_vg': 'ceph-457b984f-2001-5589-9984-9a697803acd2'})  2025-09-16 00:43:11.301650 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5', 'data_vg': 'ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5'})  2025-09-16 00:43:11.301661 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:11.301672 | orchestrator | 2025-09-16 00:43:11.301683 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-16 00:43:11.301693 | orchestrator | Tuesday 16 September 2025 00:43:11 +0000 (0:00:00.153) 0:01:06.521 ***** 2025-09-16 00:43:11.301710 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-457b984f-2001-5589-9984-9a697803acd2', 'data_vg': 'ceph-457b984f-2001-5589-9984-9a697803acd2'})  2025-09-16 00:43:14.119527 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5', 'data_vg': 'ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5'})  2025-09-16 00:43:14.119608 | orchestrator | skipping: [testbed-node-5] 2025-09-16 
00:43:14.119617 | orchestrator | 2025-09-16 00:43:14.119624 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-16 00:43:14.119631 | orchestrator | Tuesday 16 September 2025 00:43:11 +0000 (0:00:00.146) 0:01:06.667 ***** 2025-09-16 00:43:14.119637 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-457b984f-2001-5589-9984-9a697803acd2', 'data_vg': 'ceph-457b984f-2001-5589-9984-9a697803acd2'})  2025-09-16 00:43:14.119643 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5', 'data_vg': 'ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5'})  2025-09-16 00:43:14.119648 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:14.119654 | orchestrator | 2025-09-16 00:43:14.119659 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-16 00:43:14.119665 | orchestrator | Tuesday 16 September 2025 00:43:11 +0000 (0:00:00.147) 0:01:06.815 ***** 2025-09-16 00:43:14.119670 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-457b984f-2001-5589-9984-9a697803acd2', 'data_vg': 'ceph-457b984f-2001-5589-9984-9a697803acd2'})  2025-09-16 00:43:14.119692 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5', 'data_vg': 'ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5'})  2025-09-16 00:43:14.119698 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:14.119703 | orchestrator | 2025-09-16 00:43:14.119709 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-16 00:43:14.119714 | orchestrator | Tuesday 16 September 2025 00:43:11 +0000 (0:00:00.129) 0:01:06.945 ***** 2025-09-16 00:43:14.119720 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-457b984f-2001-5589-9984-9a697803acd2', 'data_vg': 'ceph-457b984f-2001-5589-9984-9a697803acd2'})  2025-09-16 00:43:14.119725 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5', 'data_vg': 'ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5'})  2025-09-16 00:43:14.119731 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:14.119736 | orchestrator | 2025-09-16 00:43:14.119790 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-16 00:43:14.119797 | orchestrator | Tuesday 16 September 2025 00:43:11 +0000 (0:00:00.139) 0:01:07.084 ***** 2025-09-16 00:43:14.119803 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-457b984f-2001-5589-9984-9a697803acd2', 'data_vg': 'ceph-457b984f-2001-5589-9984-9a697803acd2'})  2025-09-16 00:43:14.119808 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5', 'data_vg': 'ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5'})  2025-09-16 00:43:14.119814 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:14.119819 | orchestrator | 2025-09-16 00:43:14.119825 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-16 00:43:14.119830 | orchestrator | Tuesday 16 September 2025 00:43:12 +0000 (0:00:00.327) 0:01:07.412 ***** 2025-09-16 00:43:14.119836 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-457b984f-2001-5589-9984-9a697803acd2', 'data_vg': 'ceph-457b984f-2001-5589-9984-9a697803acd2'})  2025-09-16 00:43:14.119841 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5', 'data_vg': 'ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5'})  2025-09-16 00:43:14.119847 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:14.119852 | orchestrator | 2025-09-16 00:43:14.119857 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-16 00:43:14.119863 | orchestrator | Tuesday 16 September 2025 00:43:12 +0000 (0:00:00.151) 0:01:07.563 ***** 2025-09-16 00:43:14.119868 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:43:14.119875 | orchestrator | 2025-09-16 00:43:14.119880 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-16 00:43:14.119885 | orchestrator | Tuesday 16 September 2025 00:43:12 +0000 (0:00:00.502) 0:01:08.066 ***** 2025-09-16 00:43:14.119891 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:43:14.119896 | orchestrator | 2025-09-16 00:43:14.119901 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-16 00:43:14.119907 | orchestrator | Tuesday 16 September 2025 00:43:13 +0000 (0:00:00.503) 0:01:08.569 ***** 2025-09-16 00:43:14.119912 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:43:14.119917 | orchestrator | 2025-09-16 00:43:14.119923 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-16 00:43:14.119928 | orchestrator | Tuesday 16 September 2025 00:43:13 +0000 (0:00:00.142) 0:01:08.712 ***** 2025-09-16 00:43:14.119934 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-457b984f-2001-5589-9984-9a697803acd2', 'vg_name': 'ceph-457b984f-2001-5589-9984-9a697803acd2'}) 2025-09-16 00:43:14.119940 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5', 'vg_name': 'ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5'}) 2025-09-16 00:43:14.119946 | orchestrator | 2025-09-16 00:43:14.119951 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-16 00:43:14.119964 | orchestrator | Tuesday 16 September 2025 00:43:13 +0000 (0:00:00.165) 0:01:08.877 ***** 2025-09-16 00:43:14.119982 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-457b984f-2001-5589-9984-9a697803acd2', 'data_vg': 'ceph-457b984f-2001-5589-9984-9a697803acd2'})  2025-09-16 00:43:14.119988 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5', 'data_vg': 'ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5'})  2025-09-16 00:43:14.119993 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:14.119999 | orchestrator | 2025-09-16 00:43:14.120004 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-16 00:43:14.120009 | orchestrator | Tuesday 16 September 2025 00:43:13 +0000 (0:00:00.152) 0:01:09.029 ***** 2025-09-16 00:43:14.120015 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-457b984f-2001-5589-9984-9a697803acd2', 'data_vg': 'ceph-457b984f-2001-5589-9984-9a697803acd2'})  2025-09-16 00:43:14.120020 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5', 'data_vg': 'ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5'})  2025-09-16 00:43:14.120026 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:14.120031 | orchestrator | 2025-09-16 00:43:14.120036 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes 
is missing] ************************ 2025-09-16 00:43:14.120042 | orchestrator | Tuesday 16 September 2025 00:43:13 +0000 (0:00:00.153) 0:01:09.183 ***** 2025-09-16 00:43:14.120047 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-457b984f-2001-5589-9984-9a697803acd2', 'data_vg': 'ceph-457b984f-2001-5589-9984-9a697803acd2'})  2025-09-16 00:43:14.120052 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5', 'data_vg': 'ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5'})  2025-09-16 00:43:14.120058 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:14.120063 | orchestrator | 2025-09-16 00:43:14.120068 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-16 00:43:14.120074 | orchestrator | Tuesday 16 September 2025 00:43:13 +0000 (0:00:00.152) 0:01:09.336 ***** 2025-09-16 00:43:14.120079 | orchestrator | ok: [testbed-node-5] => { 2025-09-16 00:43:14.120085 | orchestrator |  "lvm_report": { 2025-09-16 00:43:14.120092 | orchestrator |  "lv": [ 2025-09-16 00:43:14.120098 | orchestrator |  { 2025-09-16 00:43:14.120105 | orchestrator |  "lv_name": "osd-block-457b984f-2001-5589-9984-9a697803acd2", 2025-09-16 00:43:14.120115 | orchestrator |  "vg_name": "ceph-457b984f-2001-5589-9984-9a697803acd2" 2025-09-16 00:43:14.120121 | orchestrator |  }, 2025-09-16 00:43:14.120127 | orchestrator |  { 2025-09-16 00:43:14.120134 | orchestrator |  "lv_name": "osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5", 2025-09-16 00:43:14.120140 | orchestrator |  "vg_name": "ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5" 2025-09-16 00:43:14.120146 | orchestrator |  } 2025-09-16 00:43:14.120152 | orchestrator |  ], 2025-09-16 00:43:14.120159 | orchestrator |  "pv": [ 2025-09-16 00:43:14.120165 | orchestrator |  { 2025-09-16 00:43:14.120171 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-16 00:43:14.120177 | orchestrator |  "vg_name": "ceph-457b984f-2001-5589-9984-9a697803acd2" 2025-09-16 00:43:14.120184 | orchestrator |  }, 2025-09-16 00:43:14.120190 | orchestrator |  { 2025-09-16 00:43:14.120196 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-16 00:43:14.120202 | orchestrator |  "vg_name": "ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5" 2025-09-16 00:43:14.120209 | orchestrator |  } 2025-09-16 00:43:14.120215 | orchestrator |  ] 2025-09-16 00:43:14.120221 | orchestrator |  } 2025-09-16 00:43:14.120227 | orchestrator | } 2025-09-16 00:43:14.120234 | orchestrator | 2025-09-16 00:43:14.120240 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:43:14.120250 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-16 00:43:14.120257 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-16 00:43:14.120263 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-16 00:43:14.120269 | orchestrator | 2025-09-16 00:43:14.120275 | orchestrator | 2025-09-16 00:43:14.120281 | orchestrator | 2025-09-16 00:43:14.120287 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:43:14.120294 | orchestrator | Tuesday 16 September 2025 00:43:14 +0000 (0:00:00.136) 0:01:09.473 ***** 2025-09-16 00:43:14.120300 | orchestrator | =============================================================================== 2025-09-16 00:43:14.120306 | 
orchestrator | Create block VGs -------------------------------------------------------- 5.66s 2025-09-16 00:43:14.120311 | orchestrator | Create block LVs -------------------------------------------------------- 3.99s 2025-09-16 00:43:14.120317 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.97s 2025-09-16 00:43:14.120323 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.79s 2025-09-16 00:43:14.120330 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.55s 2025-09-16 00:43:14.120336 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.51s 2025-09-16 00:43:14.120342 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.49s 2025-09-16 00:43:14.120348 | orchestrator | Add known partitions to the list of available block devices ------------- 1.39s 2025-09-16 00:43:14.120358 | orchestrator | Add known links to the list of available block devices ------------------ 1.17s 2025-09-16 00:43:14.465466 | orchestrator | Add known partitions to the list of available block devices ------------- 1.09s 2025-09-16 00:43:14.465550 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s 2025-09-16 00:43:14.465564 | orchestrator | Print LVM report data --------------------------------------------------- 0.86s 2025-09-16 00:43:14.465576 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2025-09-16 00:43:14.465586 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.69s 2025-09-16 00:43:14.465597 | orchestrator | Prepare variables for OSD count check ----------------------------------- 0.68s 2025-09-16 00:43:14.465608 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2025-09-16 00:43:14.465619 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.68s 2025-09-16 00:43:14.465629 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.67s 2025-09-16 00:43:14.465640 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s 2025-09-16 00:43:14.465651 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2025-09-16 00:43:26.664417 | orchestrator | 2025-09-16 00:43:26 | INFO  | Task 69f6fec2-c799-4595-90a1-2f218a7744cc (facts) was prepared for execution. 2025-09-16 00:43:26.664551 | orchestrator | 2025-09-16 00:43:26 | INFO  | It takes a moment until task 69f6fec2-c799-4595-90a1-2f218a7744cc (facts) has been started and output is visible here. 
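Note on the LVM report printed at the end of the Ceph configuration play above: the lvm_report structure is simply an lv list and a pv list, each pairing a name with its volume group. A minimal, hypothetical sketch of how that data could be reproduced by hand is shown below; it is not part of the job itself, it assumes the LVM2 lvs/pvs binaries are installed on the node, and it assumes their JSON report layout (report[0].lv / report[0].pv) matches the fields shown in the log.

# Hypothetical ad-hoc playbook (sketch only, not part of the testbed job).
- name: Reproduce the LVM report printed by the Ceph configuration play
  hosts: testbed-node-5
  become: true
  gather_facts: false
  tasks:
    - name: Get list of LVs with associated VGs
      ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
      register: _lvs_cmd_output
      changed_when: false

    - name: Get list of PVs with associated VGs
      ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name
      register: _pvs_cmd_output
      changed_when: false

    - name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
      ansible.builtin.set_fact:
        lvm_report:
          lv: "{{ (_lvs_cmd_output.stdout | from_json).report[0].lv }}"
          pv: "{{ (_pvs_cmd_output.stdout | from_json).report[0].pv }}"

    - name: Print LVM report data
      ansible.builtin.debug:
        var: lvm_report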
2025-09-16 00:43:38.554014 | orchestrator | 2025-09-16 00:43:38.554166 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-16 00:43:38.554181 | orchestrator | 2025-09-16 00:43:38.554191 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-16 00:43:38.554202 | orchestrator | Tuesday 16 September 2025 00:43:30 +0000 (0:00:00.241) 0:00:00.241 ***** 2025-09-16 00:43:38.554212 | orchestrator | ok: [testbed-manager] 2025-09-16 00:43:38.554222 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:43:38.554256 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:43:38.554265 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:43:38.554274 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:43:38.554283 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:43:38.554292 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:43:38.554301 | orchestrator | 2025-09-16 00:43:38.554310 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-16 00:43:38.554319 | orchestrator | Tuesday 16 September 2025 00:43:31 +0000 (0:00:00.888) 0:00:01.130 ***** 2025-09-16 00:43:38.554329 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:43:38.554338 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:43:38.554348 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:43:38.554357 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:43:38.554365 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:43:38.554375 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:43:38.554384 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:43:38.554393 | orchestrator | 2025-09-16 00:43:38.554402 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-16 00:43:38.554411 | orchestrator | 2025-09-16 00:43:38.554419 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-16 00:43:38.554428 | orchestrator | Tuesday 16 September 2025 00:43:32 +0000 (0:00:01.062) 0:00:02.193 ***** 2025-09-16 00:43:38.554437 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:43:38.554446 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:43:38.554455 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:43:38.554464 | orchestrator | ok: [testbed-manager] 2025-09-16 00:43:38.554473 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:43:38.554481 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:43:38.554490 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:43:38.554499 | orchestrator | 2025-09-16 00:43:38.554508 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-16 00:43:38.554517 | orchestrator | 2025-09-16 00:43:38.554525 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-16 00:43:38.554534 | orchestrator | Tuesday 16 September 2025 00:43:37 +0000 (0:00:05.524) 0:00:07.717 ***** 2025-09-16 00:43:38.554543 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:43:38.554552 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:43:38.554561 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:43:38.554571 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:43:38.554580 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:43:38.554589 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:43:38.554598 | orchestrator | skipping: 
[testbed-node-5] 2025-09-16 00:43:38.554608 | orchestrator | 2025-09-16 00:43:38.554618 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:43:38.554628 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:43:38.554639 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:43:38.554649 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:43:38.554659 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:43:38.554669 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:43:38.554679 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:43:38.554688 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:43:38.554705 | orchestrator | 2025-09-16 00:43:38.554714 | orchestrator | 2025-09-16 00:43:38.554723 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:43:38.554732 | orchestrator | Tuesday 16 September 2025 00:43:38 +0000 (0:00:00.448) 0:00:08.166 ***** 2025-09-16 00:43:38.554741 | orchestrator | =============================================================================== 2025-09-16 00:43:38.554769 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.52s 2025-09-16 00:43:38.554779 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.06s 2025-09-16 00:43:38.554789 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.89s 2025-09-16 00:43:38.554798 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s 2025-09-16 00:43:50.466946 | orchestrator | 2025-09-16 00:43:50 | INFO  | Task 4a91492a-dc60-4ace-baab-3e17a09513d2 (frr) was prepared for execution. 2025-09-16 00:43:50.467076 | orchestrator | 2025-09-16 00:43:50 | INFO  | It takes a moment until task 4a91492a-dc60-4ace-baab-3e17a09513d2 (frr) has been started and output is visible here. 
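Note on the facts play above: the pattern is to ensure a local facts directory exists on every host and then force a full fact-gathering pass. The sketch below illustrates that pattern only; the /etc/ansible/facts.d path is an assumption (the default location Ansible reads *.fact files from), and the actual osism.commons.facts role may use different paths and conditions.

# Hypothetical sketch of the "Apply role facts" pattern (assumptions noted above).
- name: Apply role facts (sketch)
  hosts: all
  become: true
  gather_facts: false
  tasks:
    - name: Create custom facts directory
      ansible.builtin.file:
        path: /etc/ansible/facts.d   # assumed default custom-facts path
        state: directory
        mode: "0755"

    - name: Gather facts about hosts
      ansible.builtin.setup: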
2025-09-16 00:44:12.751426 | orchestrator | 2025-09-16 00:44:12.751542 | orchestrator | PLAY [Apply role frr] ********************************************************** 2025-09-16 00:44:12.751560 | orchestrator | 2025-09-16 00:44:12.751572 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2025-09-16 00:44:12.751584 | orchestrator | Tuesday 16 September 2025 00:43:54 +0000 (0:00:00.214) 0:00:00.214 ***** 2025-09-16 00:44:12.751615 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2025-09-16 00:44:12.751628 | orchestrator | 2025-09-16 00:44:12.751639 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2025-09-16 00:44:12.751650 | orchestrator | Tuesday 16 September 2025 00:43:54 +0000 (0:00:00.200) 0:00:00.414 ***** 2025-09-16 00:44:12.751661 | orchestrator | changed: [testbed-manager] 2025-09-16 00:44:12.751673 | orchestrator | 2025-09-16 00:44:12.751684 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2025-09-16 00:44:12.751695 | orchestrator | Tuesday 16 September 2025 00:43:55 +0000 (0:00:01.024) 0:00:01.439 ***** 2025-09-16 00:44:12.751706 | orchestrator | changed: [testbed-manager] 2025-09-16 00:44:12.751717 | orchestrator | 2025-09-16 00:44:12.751735 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2025-09-16 00:44:12.751786 | orchestrator | Tuesday 16 September 2025 00:44:03 +0000 (0:00:08.076) 0:00:09.515 ***** 2025-09-16 00:44:12.751799 | orchestrator | ok: [testbed-manager] 2025-09-16 00:44:12.751811 | orchestrator | 2025-09-16 00:44:12.751822 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2025-09-16 00:44:12.751833 | orchestrator | Tuesday 16 September 2025 00:44:04 +0000 (0:00:01.097) 0:00:10.613 ***** 2025-09-16 00:44:12.751843 | orchestrator | changed: [testbed-manager] 2025-09-16 00:44:12.751854 | orchestrator | 2025-09-16 00:44:12.751865 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2025-09-16 00:44:12.751876 | orchestrator | Tuesday 16 September 2025 00:44:05 +0000 (0:00:00.852) 0:00:11.465 ***** 2025-09-16 00:44:12.751887 | orchestrator | ok: [testbed-manager] 2025-09-16 00:44:12.751897 | orchestrator | 2025-09-16 00:44:12.751908 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2025-09-16 00:44:12.751919 | orchestrator | Tuesday 16 September 2025 00:44:06 +0000 (0:00:01.007) 0:00:12.473 ***** 2025-09-16 00:44:12.751930 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-16 00:44:12.751941 | orchestrator | 2025-09-16 00:44:12.751952 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] *** 2025-09-16 00:44:12.751963 | orchestrator | Tuesday 16 September 2025 00:44:07 +0000 (0:00:00.728) 0:00:13.202 ***** 2025-09-16 00:44:12.751975 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:44:12.751988 | orchestrator | 2025-09-16 00:44:12.752001 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] ********* 2025-09-16 00:44:12.752037 | orchestrator | Tuesday 16 September 2025 00:44:07 +0000 (0:00:00.152) 0:00:13.354 ***** 2025-09-16 00:44:12.752050 | orchestrator | changed: [testbed-manager] 2025-09-16 00:44:12.752062 | orchestrator 
| 2025-09-16 00:44:12.752075 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2025-09-16 00:44:12.752086 | orchestrator | Tuesday 16 September 2025 00:44:08 +0000 (0:00:00.867) 0:00:14.222 ***** 2025-09-16 00:44:12.752099 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2025-09-16 00:44:12.752111 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2025-09-16 00:44:12.752125 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2025-09-16 00:44:12.752138 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2025-09-16 00:44:12.752151 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2025-09-16 00:44:12.752163 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2025-09-16 00:44:12.752175 | orchestrator | 2025-09-16 00:44:12.752188 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2025-09-16 00:44:12.752200 | orchestrator | Tuesday 16 September 2025 00:44:10 +0000 (0:00:01.958) 0:00:16.180 ***** 2025-09-16 00:44:12.752213 | orchestrator | ok: [testbed-manager] 2025-09-16 00:44:12.752225 | orchestrator | 2025-09-16 00:44:12.752237 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2025-09-16 00:44:12.752250 | orchestrator | Tuesday 16 September 2025 00:44:11 +0000 (0:00:01.195) 0:00:17.376 ***** 2025-09-16 00:44:12.752262 | orchestrator | changed: [testbed-manager] 2025-09-16 00:44:12.752274 | orchestrator | 2025-09-16 00:44:12.752286 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:44:12.752299 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-16 00:44:12.752312 | orchestrator | 2025-09-16 00:44:12.752325 | orchestrator | 2025-09-16 00:44:12.752335 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:44:12.752346 | orchestrator | Tuesday 16 September 2025 00:44:12 +0000 (0:00:01.301) 0:00:18.677 ***** 2025-09-16 00:44:12.752357 | orchestrator | =============================================================================== 2025-09-16 00:44:12.752368 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.08s 2025-09-16 00:44:12.752378 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 1.96s 2025-09-16 00:44:12.752389 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.30s 2025-09-16 00:44:12.752400 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.20s 2025-09-16 00:44:12.752427 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.10s 2025-09-16 00:44:12.752439 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.02s 2025-09-16 00:44:12.752450 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.01s 2025-09-16 00:44:12.752461 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.87s 2025-09-16 
00:44:12.752471 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.85s 2025-09-16 00:44:12.752482 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.73s 2025-09-16 00:44:12.752493 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.20s 2025-09-16 00:44:12.752504 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.15s 2025-09-16 00:44:12.948525 | orchestrator | 2025-09-16 00:44:12.951927 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Tue Sep 16 00:44:12 UTC 2025 2025-09-16 00:44:12.951976 | orchestrator | 2025-09-16 00:44:14.617402 | orchestrator | 2025-09-16 00:44:14 | INFO  | Collection nutshell is prepared for execution 2025-09-16 00:44:14.617504 | orchestrator | 2025-09-16 00:44:14 | INFO  | D [0] - dotfiles 2025-09-16 00:44:24.710898 | orchestrator | 2025-09-16 00:44:24 | INFO  | D [0] - homer 2025-09-16 00:44:24.711015 | orchestrator | 2025-09-16 00:44:24 | INFO  | D [0] - netdata 2025-09-16 00:44:24.711032 | orchestrator | 2025-09-16 00:44:24 | INFO  | D [0] - openstackclient 2025-09-16 00:44:24.711045 | orchestrator | 2025-09-16 00:44:24 | INFO  | D [0] - phpmyadmin 2025-09-16 00:44:24.711056 | orchestrator | 2025-09-16 00:44:24 | INFO  | A [0] - common 2025-09-16 00:44:24.714158 | orchestrator | 2025-09-16 00:44:24 | INFO  | A [1] -- loadbalancer 2025-09-16 00:44:24.714282 | orchestrator | 2025-09-16 00:44:24 | INFO  | D [2] --- opensearch 2025-09-16 00:44:24.714600 | orchestrator | 2025-09-16 00:44:24 | INFO  | A [2] --- mariadb-ng 2025-09-16 00:44:24.714813 | orchestrator | 2025-09-16 00:44:24 | INFO  | D [3] ---- horizon 2025-09-16 00:44:24.715077 | orchestrator | 2025-09-16 00:44:24 | INFO  | A [3] ---- keystone 2025-09-16 00:44:24.715417 | orchestrator | 2025-09-16 00:44:24 | INFO  | A [4] ----- neutron 2025-09-16 00:44:24.715739 | orchestrator | 2025-09-16 00:44:24 | INFO  | D [5] ------ wait-for-nova 2025-09-16 00:44:24.716423 | orchestrator | 2025-09-16 00:44:24 | INFO  | A [5] ------ octavia 2025-09-16 00:44:24.717616 | orchestrator | 2025-09-16 00:44:24 | INFO  | D [4] ----- barbican 2025-09-16 00:44:24.717634 | orchestrator | 2025-09-16 00:44:24 | INFO  | D [4] ----- designate 2025-09-16 00:44:24.717975 | orchestrator | 2025-09-16 00:44:24 | INFO  | D [4] ----- ironic 2025-09-16 00:44:24.717993 | orchestrator | 2025-09-16 00:44:24 | INFO  | D [4] ----- placement 2025-09-16 00:44:24.718290 | orchestrator | 2025-09-16 00:44:24 | INFO  | D [4] ----- magnum 2025-09-16 00:44:24.718953 | orchestrator | 2025-09-16 00:44:24 | INFO  | A [1] -- openvswitch 2025-09-16 00:44:24.718971 | orchestrator | 2025-09-16 00:44:24 | INFO  | D [2] --- ovn 2025-09-16 00:44:24.719231 | orchestrator | 2025-09-16 00:44:24 | INFO  | D [1] -- memcached 2025-09-16 00:44:24.719390 | orchestrator | 2025-09-16 00:44:24 | INFO  | D [1] -- redis 2025-09-16 00:44:24.719628 | orchestrator | 2025-09-16 00:44:24 | INFO  | D [1] -- rabbitmq-ng 2025-09-16 00:44:24.719911 | orchestrator | 2025-09-16 00:44:24 | INFO  | A [0] - kubernetes 2025-09-16 00:44:24.722893 | orchestrator | 2025-09-16 00:44:24 | INFO  | D [1] -- kubeconfig 2025-09-16 00:44:24.722917 | orchestrator | 2025-09-16 00:44:24 | INFO  | A [1] -- copy-kubeconfig 2025-09-16 00:44:24.723708 | orchestrator | 2025-09-16 00:44:24 | INFO  | A [0] - ceph 2025-09-16 00:44:24.725249 | orchestrator | 2025-09-16 00:44:24 | INFO  | A [1] -- ceph-pools 2025-09-16 
00:44:24.725488 | orchestrator | 2025-09-16 00:44:24 | INFO  | A [2] --- copy-ceph-keys 2025-09-16 00:44:24.725507 | orchestrator | 2025-09-16 00:44:24 | INFO  | A [3] ---- cephclient 2025-09-16 00:44:24.725956 | orchestrator | 2025-09-16 00:44:24 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-09-16 00:44:24.725976 | orchestrator | 2025-09-16 00:44:24 | INFO  | A [4] ----- wait-for-keystone 2025-09-16 00:44:24.726526 | orchestrator | 2025-09-16 00:44:24 | INFO  | D [5] ------ kolla-ceph-rgw 2025-09-16 00:44:24.726547 | orchestrator | 2025-09-16 00:44:24 | INFO  | D [5] ------ glance 2025-09-16 00:44:24.727077 | orchestrator | 2025-09-16 00:44:24 | INFO  | D [5] ------ cinder 2025-09-16 00:44:24.727096 | orchestrator | 2025-09-16 00:44:24 | INFO  | D [5] ------ nova 2025-09-16 00:44:24.727137 | orchestrator | 2025-09-16 00:44:24 | INFO  | A [4] ----- prometheus 2025-09-16 00:44:24.727463 | orchestrator | 2025-09-16 00:44:24 | INFO  | D [5] ------ grafana 2025-09-16 00:44:24.945037 | orchestrator | 2025-09-16 00:44:24 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-09-16 00:44:24.945119 | orchestrator | 2025-09-16 00:44:24 | INFO  | Tasks are running in the background 2025-09-16 00:44:28.256357 | orchestrator | 2025-09-16 00:44:28 | INFO  | No task IDs specified, wait for all currently running tasks 2025-09-16 00:44:30.397370 | orchestrator | 2025-09-16 00:44:30 | INFO  | Task e9e1dc2e-bc13-483d-90a6-649d6c133a72 is in state STARTED 2025-09-16 00:44:30.397985 | orchestrator | 2025-09-16 00:44:30 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:44:30.401112 | orchestrator | 2025-09-16 00:44:30 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:44:30.402343 | orchestrator | 2025-09-16 00:44:30 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:44:30.403274 | orchestrator | 2025-09-16 00:44:30 | INFO  | Task 80dc54df-5dda-4ca3-9a98-986bf032a952 is in state STARTED 2025-09-16 00:44:30.404120 | orchestrator | 2025-09-16 00:44:30 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:44:30.405010 | orchestrator | 2025-09-16 00:44:30 | INFO  | Task 319a4377-c492-4792-bcec-1629e747e37d is in state STARTED 2025-09-16 00:44:30.405032 | orchestrator | 2025-09-16 00:44:30 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:44:33.451458 | orchestrator | 2025-09-16 00:44:33 | INFO  | Task e9e1dc2e-bc13-483d-90a6-649d6c133a72 is in state STARTED 2025-09-16 00:44:33.454949 | orchestrator | 2025-09-16 00:44:33 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:44:33.455512 | orchestrator | 2025-09-16 00:44:33 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:44:33.456265 | orchestrator | 2025-09-16 00:44:33 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:44:33.458131 | orchestrator | 2025-09-16 00:44:33 | INFO  | Task 80dc54df-5dda-4ca3-9a98-986bf032a952 is in state STARTED 2025-09-16 00:44:33.462944 | orchestrator | 2025-09-16 00:44:33 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:44:33.465943 | orchestrator | 2025-09-16 00:44:33 | INFO  | Task 319a4377-c492-4792-bcec-1629e747e37d is in state STARTED 2025-09-16 00:44:33.465963 | orchestrator | 2025-09-16 00:44:33 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:44:36.489029 | orchestrator | 2025-09-16 00:44:36 | INFO  
| Task e9e1dc2e-bc13-483d-90a6-649d6c133a72 is in state STARTED 2025-09-16 00:44:36.491014 | orchestrator | 2025-09-16 00:44:36 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:44:36.491534 | orchestrator | 2025-09-16 00:44:36 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:44:36.497444 | orchestrator | 2025-09-16 00:44:36 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:44:36.497465 | orchestrator | 2025-09-16 00:44:36 | INFO  | Task 80dc54df-5dda-4ca3-9a98-986bf032a952 is in state STARTED 2025-09-16 00:44:36.497477 | orchestrator | 2025-09-16 00:44:36 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:44:36.497488 | orchestrator | 2025-09-16 00:44:36 | INFO  | Task 319a4377-c492-4792-bcec-1629e747e37d is in state STARTED 2025-09-16 00:44:36.497499 | orchestrator | 2025-09-16 00:44:36 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:44:39.528643 | orchestrator | 2025-09-16 00:44:39 | INFO  | Task e9e1dc2e-bc13-483d-90a6-649d6c133a72 is in state STARTED 2025-09-16 00:44:39.531299 | orchestrator | 2025-09-16 00:44:39 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:44:39.531329 | orchestrator | 2025-09-16 00:44:39 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:44:39.532523 | orchestrator | 2025-09-16 00:44:39 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:44:39.535823 | orchestrator | 2025-09-16 00:44:39 | INFO  | Task 80dc54df-5dda-4ca3-9a98-986bf032a952 is in state STARTED 2025-09-16 00:44:39.536877 | orchestrator | 2025-09-16 00:44:39 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:44:39.536896 | orchestrator | 2025-09-16 00:44:39 | INFO  | Task 319a4377-c492-4792-bcec-1629e747e37d is in state STARTED 2025-09-16 00:44:39.537171 | orchestrator | 2025-09-16 00:44:39 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:44:42.648403 | orchestrator | 2025-09-16 00:44:42 | INFO  | Task e9e1dc2e-bc13-483d-90a6-649d6c133a72 is in state STARTED 2025-09-16 00:44:42.648880 | orchestrator | 2025-09-16 00:44:42 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:44:42.650226 | orchestrator | 2025-09-16 00:44:42 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:44:42.697640 | orchestrator | 2025-09-16 00:44:42 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:44:42.697695 | orchestrator | 2025-09-16 00:44:42 | INFO  | Task 80dc54df-5dda-4ca3-9a98-986bf032a952 is in state STARTED 2025-09-16 00:44:42.697705 | orchestrator | 2025-09-16 00:44:42 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:44:42.697714 | orchestrator | 2025-09-16 00:44:42 | INFO  | Task 319a4377-c492-4792-bcec-1629e747e37d is in state STARTED 2025-09-16 00:44:42.697724 | orchestrator | 2025-09-16 00:44:42 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:44:45.749916 | orchestrator | 2025-09-16 00:44:45 | INFO  | Task e9e1dc2e-bc13-483d-90a6-649d6c133a72 is in state STARTED 2025-09-16 00:44:45.752263 | orchestrator | 2025-09-16 00:44:45 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:44:45.753724 | orchestrator | 2025-09-16 00:44:45 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:44:45.755379 
| orchestrator | 2025-09-16 00:44:45 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:44:45.755525 | orchestrator | 2025-09-16 00:44:45 | INFO  | Task 80dc54df-5dda-4ca3-9a98-986bf032a952 is in state STARTED 2025-09-16 00:44:45.756797 | orchestrator | 2025-09-16 00:44:45 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:44:45.761221 | orchestrator | 2025-09-16 00:44:45 | INFO  | Task 319a4377-c492-4792-bcec-1629e747e37d is in state STARTED 2025-09-16 00:44:45.761247 | orchestrator | 2025-09-16 00:44:45 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:44:48.871394 | orchestrator | 2025-09-16 00:44:48 | INFO  | Task e9e1dc2e-bc13-483d-90a6-649d6c133a72 is in state STARTED 2025-09-16 00:44:48.871476 | orchestrator | 2025-09-16 00:44:48 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:44:48.871487 | orchestrator | 2025-09-16 00:44:48 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:44:48.873306 | orchestrator | 2025-09-16 00:44:48 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:44:48.875168 | orchestrator | 2025-09-16 00:44:48 | INFO  | Task 80dc54df-5dda-4ca3-9a98-986bf032a952 is in state STARTED 2025-09-16 00:44:48.877710 | orchestrator | 2025-09-16 00:44:48 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:44:48.882046 | orchestrator | 2025-09-16 00:44:48 | INFO  | Task 319a4377-c492-4792-bcec-1629e747e37d is in state STARTED 2025-09-16 00:44:48.882060 | orchestrator | 2025-09-16 00:44:48 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:44:51.946910 | orchestrator | 2025-09-16 00:44:51.947018 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-09-16 00:44:51.947035 | orchestrator | 2025-09-16 00:44:51.947047 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-09-16 00:44:51.947059 | orchestrator | Tuesday 16 September 2025 00:44:38 +0000 (0:00:01.059) 0:00:01.059 ***** 2025-09-16 00:44:51.947071 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:44:51.947083 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:44:51.947094 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:44:51.947105 | orchestrator | changed: [testbed-manager] 2025-09-16 00:44:51.947115 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:44:51.947126 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:44:51.947137 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:44:51.947147 | orchestrator | 2025-09-16 00:44:51.947158 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-09-16 00:44:51.947170 | orchestrator | Tuesday 16 September 2025 00:44:42 +0000 (0:00:03.702) 0:00:04.762 ***** 2025-09-16 00:44:51.947181 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-16 00:44:51.947192 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-16 00:44:51.947203 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-16 00:44:51.947214 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-16 00:44:51.947225 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-16 00:44:51.947235 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-16 00:44:51.947246 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-16 00:44:51.947257 | orchestrator | 2025-09-16 00:44:51.947268 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-09-16 00:44:51.947280 | orchestrator | Tuesday 16 September 2025 00:44:44 +0000 (0:00:02.156) 0:00:06.919 ***** 2025-09-16 00:44:51.947305 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-16 00:44:43.494715', 'end': '2025-09-16 00:44:44.504614', 'delta': '0:00:01.009899', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-16 00:44:51.947322 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-16 00:44:43.384771', 'end': '2025-09-16 00:44:43.388887', 'delta': '0:00:00.004116', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-16 00:44:51.947356 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-16 00:44:43.469977', 'end': '2025-09-16 00:44:43.476381', 'delta': '0:00:00.006404', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 
'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-16 00:44:51.947400 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-16 00:44:43.470241', 'end': '2025-09-16 00:44:43.477900', 'delta': '0:00:00.007659', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-16 00:44:51.947415 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-16 00:44:43.469620', 'end': '2025-09-16 00:44:43.476497', 'delta': '0:00:00.006877', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-16 00:44:51.947717 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-16 00:44:43.491708', 'end': '2025-09-16 00:44:43.497587', 'delta': '0:00:00.005879', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-16 00:44:51.947733 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-16 00:44:43.468770', 'end': '2025-09-16 00:44:43.474927', 'delta': '0:00:00.006157', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-16 00:44:51.947786 | orchestrator | 2025-09-16 00:44:51.947799 
| orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-09-16 00:44:51.947810 | orchestrator | Tuesday 16 September 2025 00:44:46 +0000 (0:00:01.597) 0:00:08.517 ***** 2025-09-16 00:44:51.947821 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-16 00:44:51.947832 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-16 00:44:51.947844 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-16 00:44:51.947854 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-16 00:44:51.947865 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-16 00:44:51.947876 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-16 00:44:51.947887 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-16 00:44:51.947898 | orchestrator | 2025-09-16 00:44:51.947909 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-09-16 00:44:51.947920 | orchestrator | Tuesday 16 September 2025 00:44:47 +0000 (0:00:01.024) 0:00:09.541 ***** 2025-09-16 00:44:51.947936 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-09-16 00:44:51.947947 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-09-16 00:44:51.947958 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-09-16 00:44:51.947969 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-09-16 00:44:51.947980 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-09-16 00:44:51.947991 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-09-16 00:44:51.948002 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-09-16 00:44:51.948013 | orchestrator | 2025-09-16 00:44:51.948024 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:44:51.948045 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:44:51.948059 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:44:51.948070 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:44:51.948082 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:44:51.948092 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:44:51.948103 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:44:51.948114 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:44:51.948125 | orchestrator | 2025-09-16 00:44:51.948136 | orchestrator | 2025-09-16 00:44:51.948147 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:44:51.948158 | orchestrator | Tuesday 16 September 2025 00:44:50 +0000 (0:00:03.685) 0:00:13.226 ***** 2025-09-16 00:44:51.948169 | orchestrator | =============================================================================== 2025-09-16 00:44:51.948180 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.70s 2025-09-16 00:44:51.948191 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. 
------------------ 3.69s 2025-09-16 00:44:51.948209 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.16s 2025-09-16 00:44:51.948220 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.60s 2025-09-16 00:44:51.948231 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.02s 2025-09-16 00:44:51.948242 | orchestrator | 2025-09-16 00:44:51 | INFO  | Task e9e1dc2e-bc13-483d-90a6-649d6c133a72 is in state SUCCESS 2025-09-16 00:44:51.948952 | orchestrator | 2025-09-16 00:44:51 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:44:51.951308 | orchestrator | 2025-09-16 00:44:51 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:44:51.977653 | orchestrator | 2025-09-16 00:44:51 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:44:52.162302 | orchestrator | 2025-09-16 00:44:52 | INFO  | Task 80dc54df-5dda-4ca3-9a98-986bf032a952 is in state STARTED 2025-09-16 00:44:52.236461 | orchestrator | 2025-09-16 00:44:52 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:44:52.236526 | orchestrator | 2025-09-16 00:44:52 | INFO  | Task 319a4377-c492-4792-bcec-1629e747e37d is in state STARTED 2025-09-16 00:44:52.236540 | orchestrator | 2025-09-16 00:44:52 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:44:55.466899 | orchestrator | 2025-09-16 00:44:55 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:44:55.467011 | orchestrator | 2025-09-16 00:44:55 | INFO  | Task ce9228ad-6c55-4075-bc5b-5fac8323caf4 is in state STARTED 2025-09-16 00:44:55.467028 | orchestrator | 2025-09-16 00:44:55 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:44:55.467041 | orchestrator | 2025-09-16 00:44:55 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:44:55.467052 | orchestrator | 2025-09-16 00:44:55 | INFO  | Task 80dc54df-5dda-4ca3-9a98-986bf032a952 is in state STARTED 2025-09-16 00:44:55.467063 | orchestrator | 2025-09-16 00:44:55 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:44:55.467225 | orchestrator | 2025-09-16 00:44:55 | INFO  | Task 319a4377-c492-4792-bcec-1629e747e37d is in state STARTED 2025-09-16 00:44:55.467327 | orchestrator | 2025-09-16 00:44:55 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:44:58.702291 | orchestrator | 2025-09-16 00:44:58 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:44:58.702393 | orchestrator | 2025-09-16 00:44:58 | INFO  | Task ce9228ad-6c55-4075-bc5b-5fac8323caf4 is in state STARTED 2025-09-16 00:44:58.702408 | orchestrator | 2025-09-16 00:44:58 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:44:58.702420 | orchestrator | 2025-09-16 00:44:58 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:44:58.702431 | orchestrator | 2025-09-16 00:44:58 | INFO  | Task 80dc54df-5dda-4ca3-9a98-986bf032a952 is in state STARTED 2025-09-16 00:44:58.702441 | orchestrator | 2025-09-16 00:44:58 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:44:58.702452 | orchestrator | 2025-09-16 00:44:58 | INFO  | Task 319a4377-c492-4792-bcec-1629e747e37d is in state STARTED 2025-09-16 00:44:58.702464 | orchestrator | 2025-09-16 
00:44:58 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:45:01.614578 | orchestrator | 2025-09-16 00:45:01 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:45:01.614929 | orchestrator | 2025-09-16 00:45:01 | INFO  | Task ce9228ad-6c55-4075-bc5b-5fac8323caf4 is in state STARTED 2025-09-16 00:45:01.615832 | orchestrator | 2025-09-16 00:45:01 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:45:01.616520 | orchestrator | 2025-09-16 00:45:01 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:45:01.617284 | orchestrator | 2025-09-16 00:45:01 | INFO  | Task 80dc54df-5dda-4ca3-9a98-986bf032a952 is in state STARTED 2025-09-16 00:45:01.617947 | orchestrator | 2025-09-16 00:45:01 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:45:01.618656 | orchestrator | 2025-09-16 00:45:01 | INFO  | Task 319a4377-c492-4792-bcec-1629e747e37d is in state STARTED 2025-09-16 00:45:01.618683 | orchestrator | 2025-09-16 00:45:01 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:45:04.672731 | orchestrator | 2025-09-16 00:45:04 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:45:04.678161 | orchestrator | 2025-09-16 00:45:04 | INFO  | Task ce9228ad-6c55-4075-bc5b-5fac8323caf4 is in state STARTED 2025-09-16 00:45:04.679704 | orchestrator | 2025-09-16 00:45:04 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:45:04.681714 | orchestrator | 2025-09-16 00:45:04 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:45:04.686121 | orchestrator | 2025-09-16 00:45:04 | INFO  | Task 80dc54df-5dda-4ca3-9a98-986bf032a952 is in state STARTED 2025-09-16 00:45:04.688901 | orchestrator | 2025-09-16 00:45:04 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:45:04.693004 | orchestrator | 2025-09-16 00:45:04 | INFO  | Task 319a4377-c492-4792-bcec-1629e747e37d is in state STARTED 2025-09-16 00:45:04.693029 | orchestrator | 2025-09-16 00:45:04 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:45:07.774739 | orchestrator | 2025-09-16 00:45:07 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:45:07.777878 | orchestrator | 2025-09-16 00:45:07 | INFO  | Task ce9228ad-6c55-4075-bc5b-5fac8323caf4 is in state STARTED 2025-09-16 00:45:07.778730 | orchestrator | 2025-09-16 00:45:07 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:45:07.782788 | orchestrator | 2025-09-16 00:45:07 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:45:07.783499 | orchestrator | 2025-09-16 00:45:07 | INFO  | Task 80dc54df-5dda-4ca3-9a98-986bf032a952 is in state STARTED 2025-09-16 00:45:07.784280 | orchestrator | 2025-09-16 00:45:07 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:45:07.785038 | orchestrator | 2025-09-16 00:45:07 | INFO  | Task 319a4377-c492-4792-bcec-1629e747e37d is in state STARTED 2025-09-16 00:45:07.785538 | orchestrator | 2025-09-16 00:45:07 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:45:10.942695 | orchestrator | 2025-09-16 00:45:10 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:45:10.942809 | orchestrator | 2025-09-16 00:45:10 | INFO  | Task ce9228ad-6c55-4075-bc5b-5fac8323caf4 is in state STARTED 2025-09-16 00:45:10.942823 | 
orchestrator | 2025-09-16 00:45:10 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:45:10.942847 | orchestrator | 2025-09-16 00:45:10 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:45:10.942857 | orchestrator | 2025-09-16 00:45:10 | INFO  | Task 80dc54df-5dda-4ca3-9a98-986bf032a952 is in state STARTED 2025-09-16 00:45:10.942867 | orchestrator | 2025-09-16 00:45:10 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:45:10.942894 | orchestrator | 2025-09-16 00:45:10 | INFO  | Task 319a4377-c492-4792-bcec-1629e747e37d is in state STARTED 2025-09-16 00:45:10.942904 | orchestrator | 2025-09-16 00:45:10 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:45:14.027815 | orchestrator | 2025-09-16 00:45:14 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:45:14.030451 | orchestrator | 2025-09-16 00:45:14 | INFO  | Task ce9228ad-6c55-4075-bc5b-5fac8323caf4 is in state STARTED 2025-09-16 00:45:14.030528 | orchestrator | 2025-09-16 00:45:14 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:45:14.031698 | orchestrator | 2025-09-16 00:45:14 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:45:14.032056 | orchestrator | 2025-09-16 00:45:14 | INFO  | Task 80dc54df-5dda-4ca3-9a98-986bf032a952 is in state SUCCESS 2025-09-16 00:45:14.032655 | orchestrator | 2025-09-16 00:45:14 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:45:14.033599 | orchestrator | 2025-09-16 00:45:14 | INFO  | Task 319a4377-c492-4792-bcec-1629e747e37d is in state STARTED 2025-09-16 00:45:14.033621 | orchestrator | 2025-09-16 00:45:14 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:45:17.072783 | orchestrator | 2025-09-16 00:45:17 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:45:17.073013 | orchestrator | 2025-09-16 00:45:17 | INFO  | Task ce9228ad-6c55-4075-bc5b-5fac8323caf4 is in state STARTED 2025-09-16 00:45:17.073042 | orchestrator | 2025-09-16 00:45:17 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:45:17.073490 | orchestrator | 2025-09-16 00:45:17 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:45:17.155519 | orchestrator | 2025-09-16 00:45:17 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:45:17.155548 | orchestrator | 2025-09-16 00:45:17 | INFO  | Task 319a4377-c492-4792-bcec-1629e747e37d is in state STARTED 2025-09-16 00:45:17.155558 | orchestrator | 2025-09-16 00:45:17 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:45:20.285729 | orchestrator | 2025-09-16 00:45:20 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:45:20.285885 | orchestrator | 2025-09-16 00:45:20 | INFO  | Task ce9228ad-6c55-4075-bc5b-5fac8323caf4 is in state STARTED 2025-09-16 00:45:20.285901 | orchestrator | 2025-09-16 00:45:20 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:45:20.285913 | orchestrator | 2025-09-16 00:45:20 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:45:20.285924 | orchestrator | 2025-09-16 00:45:20 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:45:20.285935 | orchestrator | 2025-09-16 00:45:20 | INFO  | Task 319a4377-c492-4792-bcec-1629e747e37d is 
in state STARTED 2025-09-16 00:45:20.285946 | orchestrator | 2025-09-16 00:45:20 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:45:23.298330 | orchestrator | 2025-09-16 00:45:23 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:45:23.298428 | orchestrator | 2025-09-16 00:45:23 | INFO  | Task ce9228ad-6c55-4075-bc5b-5fac8323caf4 is in state STARTED 2025-09-16 00:45:23.299408 | orchestrator | 2025-09-16 00:45:23 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:45:23.299432 | orchestrator | 2025-09-16 00:45:23 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:45:23.300183 | orchestrator | 2025-09-16 00:45:23 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:45:23.300735 | orchestrator | 2025-09-16 00:45:23 | INFO  | Task 319a4377-c492-4792-bcec-1629e747e37d is in state STARTED 2025-09-16 00:45:23.300800 | orchestrator | 2025-09-16 00:45:23 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:45:26.335472 | orchestrator | 2025-09-16 00:45:26 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:45:26.337686 | orchestrator | 2025-09-16 00:45:26 | INFO  | Task ce9228ad-6c55-4075-bc5b-5fac8323caf4 is in state STARTED 2025-09-16 00:45:26.339333 | orchestrator | 2025-09-16 00:45:26 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:45:26.340975 | orchestrator | 2025-09-16 00:45:26 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:45:26.342350 | orchestrator | 2025-09-16 00:45:26 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:45:26.343312 | orchestrator | 2025-09-16 00:45:26 | INFO  | Task 319a4377-c492-4792-bcec-1629e747e37d is in state SUCCESS 2025-09-16 00:45:26.343336 | orchestrator | 2025-09-16 00:45:26 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:45:29.442077 | orchestrator | 2025-09-16 00:45:29 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:45:29.442168 | orchestrator | 2025-09-16 00:45:29 | INFO  | Task ce9228ad-6c55-4075-bc5b-5fac8323caf4 is in state STARTED 2025-09-16 00:45:29.442181 | orchestrator | 2025-09-16 00:45:29 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:45:29.442191 | orchestrator | 2025-09-16 00:45:29 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:45:29.445655 | orchestrator | 2025-09-16 00:45:29 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:45:29.445696 | orchestrator | 2025-09-16 00:45:29 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:45:32.487078 | orchestrator | 2025-09-16 00:45:32 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:45:32.487204 | orchestrator | 2025-09-16 00:45:32 | INFO  | Task ce9228ad-6c55-4075-bc5b-5fac8323caf4 is in state STARTED 2025-09-16 00:45:32.487227 | orchestrator | 2025-09-16 00:45:32 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:45:32.490421 | orchestrator | 2025-09-16 00:45:32 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:45:32.495907 | orchestrator | 2025-09-16 00:45:32 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:45:32.495942 | orchestrator | 2025-09-16 00:45:32 | INFO  | Wait 1 second(s) until 
the next check 2025-09-16 00:45:35.544142 | orchestrator | 2025-09-16 00:45:35 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:45:35.544246 | orchestrator | 2025-09-16 00:45:35 | INFO  | Task ce9228ad-6c55-4075-bc5b-5fac8323caf4 is in state STARTED 2025-09-16 00:45:35.546250 | orchestrator | 2025-09-16 00:45:35 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:45:35.546967 | orchestrator | 2025-09-16 00:45:35 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:45:35.547907 | orchestrator | 2025-09-16 00:45:35 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:45:35.547991 | orchestrator | 2025-09-16 00:45:35 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:45:38.585556 | orchestrator | 2025-09-16 00:45:38 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:45:38.589691 | orchestrator | 2025-09-16 00:45:38 | INFO  | Task ce9228ad-6c55-4075-bc5b-5fac8323caf4 is in state STARTED 2025-09-16 00:45:38.591984 | orchestrator | 2025-09-16 00:45:38 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:45:38.592487 | orchestrator | 2025-09-16 00:45:38 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:45:38.593912 | orchestrator | 2025-09-16 00:45:38 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:45:38.593932 | orchestrator | 2025-09-16 00:45:38 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:45:41.654866 | orchestrator | 2025-09-16 00:45:41 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:45:41.661055 | orchestrator | 2025-09-16 00:45:41 | INFO  | Task ce9228ad-6c55-4075-bc5b-5fac8323caf4 is in state STARTED 2025-09-16 00:45:41.665343 | orchestrator | 2025-09-16 00:45:41 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:45:41.669826 | orchestrator | 2025-09-16 00:45:41 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:45:41.670380 | orchestrator | 2025-09-16 00:45:41 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:45:41.670422 | orchestrator | 2025-09-16 00:45:41 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:45:44.723516 | orchestrator | 2025-09-16 00:45:44 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:45:44.723620 | orchestrator | 2025-09-16 00:45:44 | INFO  | Task ce9228ad-6c55-4075-bc5b-5fac8323caf4 is in state STARTED 2025-09-16 00:45:44.726162 | orchestrator | 2025-09-16 00:45:44 | INFO  | Task baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state STARTED 2025-09-16 00:45:44.727481 | orchestrator | 2025-09-16 00:45:44 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:45:44.730855 | orchestrator | 2025-09-16 00:45:44 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:45:44.730903 | orchestrator | 2025-09-16 00:45:44 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:45:47.766708 | orchestrator | 2025-09-16 00:45:47 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:45:47.767556 | orchestrator | 2025-09-16 00:45:47 | INFO  | Task ce9228ad-6c55-4075-bc5b-5fac8323caf4 is in state STARTED 2025-09-16 00:45:47.770462 | orchestrator | 2025-09-16 00:45:47 | INFO  | Task 
baf1f0e7-de49-4c2c-9e09-dac2f78e8ab7 is in state SUCCESS 2025-09-16 00:45:47.771886 | orchestrator | 2025-09-16 00:45:47.771929 | orchestrator | 2025-09-16 00:45:47.771941 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-09-16 00:45:47.771953 | orchestrator | 2025-09-16 00:45:47.771965 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-09-16 00:45:47.771977 | orchestrator | Tuesday 16 September 2025 00:44:37 +0000 (0:00:00.504) 0:00:00.504 ***** 2025-09-16 00:45:47.771989 | orchestrator | ok: [testbed-manager] => { 2025-09-16 00:45:47.772002 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-09-16 00:45:47.772015 | orchestrator | } 2025-09-16 00:45:47.772027 | orchestrator | 2025-09-16 00:45:47.772038 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-09-16 00:45:47.772049 | orchestrator | Tuesday 16 September 2025 00:44:37 +0000 (0:00:00.116) 0:00:00.621 ***** 2025-09-16 00:45:47.772060 | orchestrator | ok: [testbed-manager] 2025-09-16 00:45:47.772071 | orchestrator | 2025-09-16 00:45:47.772103 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-09-16 00:45:47.772114 | orchestrator | Tuesday 16 September 2025 00:44:39 +0000 (0:00:01.315) 0:00:01.937 ***** 2025-09-16 00:45:47.772126 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-09-16 00:45:47.772136 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-09-16 00:45:47.772173 | orchestrator | 2025-09-16 00:45:47.772184 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-09-16 00:45:47.772195 | orchestrator | Tuesday 16 September 2025 00:44:40 +0000 (0:00:01.056) 0:00:02.993 ***** 2025-09-16 00:45:47.772206 | orchestrator | changed: [testbed-manager] 2025-09-16 00:45:47.772216 | orchestrator | 2025-09-16 00:45:47.772227 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-09-16 00:45:47.772238 | orchestrator | Tuesday 16 September 2025 00:44:42 +0000 (0:00:01.940) 0:00:04.934 ***** 2025-09-16 00:45:47.772249 | orchestrator | changed: [testbed-manager] 2025-09-16 00:45:47.772259 | orchestrator | 2025-09-16 00:45:47.772270 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-09-16 00:45:47.772281 | orchestrator | Tuesday 16 September 2025 00:44:43 +0000 (0:00:01.673) 0:00:06.608 ***** 2025-09-16 00:45:47.772292 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
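The "FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left)." entry above is Ansible's retries/until mechanism: the task keeps re-checking the freshly started compose service and only reports ok once it responds. Purely as an illustration of that wait pattern, and not the role's actual implementation, a minimal Python sketch could look like the following; service_is_running() is a hypothetical stand-in for the real health probe.

```python
import time


def service_is_running() -> bool:
    # Hypothetical probe; a real check might inspect `docker compose ps`
    # output or the container's reported health status.
    return True


def manage_service(retries: int = 10, delay: float = 5.0) -> None:
    """Retry the probe until it succeeds, mirroring the
    FAILED - RETRYING ... / ok sequence seen in the log above."""
    for retries_left in range(retries, -1, -1):
        if service_is_running():
            print("ok: [testbed-manager]")
            return
        print(f"FAILED - RETRYING: Manage homer service ({retries_left} retries left).")
        time.sleep(delay)
    raise RuntimeError("service did not become reachable within the retry budget")


if __name__ == "__main__":
    manage_service()
```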
2025-09-16 00:45:47.772303 | orchestrator | ok: [testbed-manager] 2025-09-16 00:45:47.772314 | orchestrator | 2025-09-16 00:45:47.772324 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-09-16 00:45:47.772335 | orchestrator | Tuesday 16 September 2025 00:45:09 +0000 (0:00:25.796) 0:00:32.404 ***** 2025-09-16 00:45:47.772346 | orchestrator | changed: [testbed-manager] 2025-09-16 00:45:47.772357 | orchestrator | 2025-09-16 00:45:47.772367 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:45:47.772378 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:45:47.772391 | orchestrator | 2025-09-16 00:45:47.772402 | orchestrator | 2025-09-16 00:45:47.772413 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:45:47.772423 | orchestrator | Tuesday 16 September 2025 00:45:13 +0000 (0:00:03.455) 0:00:35.859 ***** 2025-09-16 00:45:47.772434 | orchestrator | =============================================================================== 2025-09-16 00:45:47.772445 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.80s 2025-09-16 00:45:47.772456 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.46s 2025-09-16 00:45:47.772469 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.94s 2025-09-16 00:45:47.772481 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.67s 2025-09-16 00:45:47.772493 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.32s 2025-09-16 00:45:47.772505 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.06s 2025-09-16 00:45:47.772516 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.12s 2025-09-16 00:45:47.772529 | orchestrator | 2025-09-16 00:45:47.772541 | orchestrator | 2025-09-16 00:45:47.772552 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-09-16 00:45:47.772565 | orchestrator | 2025-09-16 00:45:47.772577 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-09-16 00:45:47.772589 | orchestrator | Tuesday 16 September 2025 00:44:39 +0000 (0:00:00.762) 0:00:00.762 ***** 2025-09-16 00:45:47.772601 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-09-16 00:45:47.772685 | orchestrator | 2025-09-16 00:45:47.772700 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-09-16 00:45:47.772713 | orchestrator | Tuesday 16 September 2025 00:44:39 +0000 (0:00:00.505) 0:00:01.268 ***** 2025-09-16 00:45:47.772733 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-09-16 00:45:47.772770 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-09-16 00:45:47.772899 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-09-16 00:45:47.772918 | orchestrator | 2025-09-16 00:45:47.772929 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-09-16 
00:45:47.772940 | orchestrator | Tuesday 16 September 2025 00:44:41 +0000 (0:00:01.778) 0:00:03.047 ***** 2025-09-16 00:45:47.772951 | orchestrator | changed: [testbed-manager] 2025-09-16 00:45:47.772962 | orchestrator | 2025-09-16 00:45:47.772972 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-09-16 00:45:47.772983 | orchestrator | Tuesday 16 September 2025 00:44:43 +0000 (0:00:01.537) 0:00:04.585 ***** 2025-09-16 00:45:47.773008 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-09-16 00:45:47.773019 | orchestrator | ok: [testbed-manager] 2025-09-16 00:45:47.773030 | orchestrator | 2025-09-16 00:45:47.773041 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-09-16 00:45:47.773052 | orchestrator | Tuesday 16 September 2025 00:45:15 +0000 (0:00:32.131) 0:00:36.716 ***** 2025-09-16 00:45:47.773063 | orchestrator | changed: [testbed-manager] 2025-09-16 00:45:47.773074 | orchestrator | 2025-09-16 00:45:47.773085 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-09-16 00:45:47.773095 | orchestrator | Tuesday 16 September 2025 00:45:17 +0000 (0:00:01.707) 0:00:38.423 ***** 2025-09-16 00:45:47.773106 | orchestrator | ok: [testbed-manager] 2025-09-16 00:45:47.773117 | orchestrator | 2025-09-16 00:45:47.773128 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-09-16 00:45:47.773138 | orchestrator | Tuesday 16 September 2025 00:45:18 +0000 (0:00:00.996) 0:00:39.419 ***** 2025-09-16 00:45:47.773149 | orchestrator | changed: [testbed-manager] 2025-09-16 00:45:47.773160 | orchestrator | 2025-09-16 00:45:47.773171 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-09-16 00:45:47.773181 | orchestrator | Tuesday 16 September 2025 00:45:21 +0000 (0:00:02.887) 0:00:42.307 ***** 2025-09-16 00:45:47.773192 | orchestrator | changed: [testbed-manager] 2025-09-16 00:45:47.773203 | orchestrator | 2025-09-16 00:45:47.773214 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-09-16 00:45:47.773224 | orchestrator | Tuesday 16 September 2025 00:45:22 +0000 (0:00:01.657) 0:00:43.965 ***** 2025-09-16 00:45:47.773235 | orchestrator | changed: [testbed-manager] 2025-09-16 00:45:47.773246 | orchestrator | 2025-09-16 00:45:47.773256 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-09-16 00:45:47.773267 | orchestrator | Tuesday 16 September 2025 00:45:23 +0000 (0:00:00.553) 0:00:44.519 ***** 2025-09-16 00:45:47.773278 | orchestrator | ok: [testbed-manager] 2025-09-16 00:45:47.773289 | orchestrator | 2025-09-16 00:45:47.773300 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:45:47.773310 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:45:47.773321 | orchestrator | 2025-09-16 00:45:47.773332 | orchestrator | 2025-09-16 00:45:47.773343 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:45:47.773354 | orchestrator | Tuesday 16 September 2025 00:45:23 +0000 (0:00:00.467) 0:00:44.986 ***** 2025-09-16 00:45:47.773364 | orchestrator | 
=============================================================================== 2025-09-16 00:45:47.773375 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 32.13s 2025-09-16 00:45:47.773386 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.89s 2025-09-16 00:45:47.773396 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.78s 2025-09-16 00:45:47.773407 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.71s 2025-09-16 00:45:47.773426 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.66s 2025-09-16 00:45:47.773437 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.54s 2025-09-16 00:45:47.773448 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.00s 2025-09-16 00:45:47.773458 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.55s 2025-09-16 00:45:47.773469 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.51s 2025-09-16 00:45:47.773480 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.47s 2025-09-16 00:45:47.773491 | orchestrator | 2025-09-16 00:45:47.773501 | orchestrator | 2025-09-16 00:45:47.773512 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-16 00:45:47.773523 | orchestrator | 2025-09-16 00:45:47.773535 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-16 00:45:47.773548 | orchestrator | Tuesday 16 September 2025 00:44:38 +0000 (0:00:00.688) 0:00:00.688 ***** 2025-09-16 00:45:47.773561 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-09-16 00:45:47.773573 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-09-16 00:45:47.773589 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-09-16 00:45:47.773601 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-09-16 00:45:47.773614 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-09-16 00:45:47.773626 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-09-16 00:45:47.773638 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-09-16 00:45:47.773650 | orchestrator | 2025-09-16 00:45:47.773662 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-09-16 00:45:47.773674 | orchestrator | 2025-09-16 00:45:47.773687 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-09-16 00:45:47.773698 | orchestrator | Tuesday 16 September 2025 00:44:40 +0000 (0:00:02.249) 0:00:02.937 ***** 2025-09-16 00:45:47.773723 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:45:47.773781 | orchestrator | 2025-09-16 00:45:47.773795 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-09-16 00:45:47.773807 | orchestrator | Tuesday 16 September 2025 00:44:42 +0000 (0:00:01.474) 0:00:04.412 ***** 2025-09-16 
00:45:47.773819 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:45:47.773832 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:45:47.773844 | orchestrator | ok: [testbed-manager] 2025-09-16 00:45:47.773856 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:45:47.773868 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:45:47.773886 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:45:47.773898 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:45:47.773908 | orchestrator | 2025-09-16 00:45:47.773919 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-09-16 00:45:47.773930 | orchestrator | Tuesday 16 September 2025 00:44:43 +0000 (0:00:01.423) 0:00:05.836 ***** 2025-09-16 00:45:47.773941 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:45:47.773952 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:45:47.773963 | orchestrator | ok: [testbed-manager] 2025-09-16 00:45:47.773973 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:45:47.773984 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:45:47.773995 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:45:47.774005 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:45:47.774076 | orchestrator | 2025-09-16 00:45:47.774091 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-09-16 00:45:47.774102 | orchestrator | Tuesday 16 September 2025 00:44:46 +0000 (0:00:03.420) 0:00:09.257 ***** 2025-09-16 00:45:47.774121 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:45:47.774132 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:45:47.774143 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:45:47.774154 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:45:47.774165 | orchestrator | changed: [testbed-manager] 2025-09-16 00:45:47.774176 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:45:47.774186 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:45:47.774197 | orchestrator | 2025-09-16 00:45:47.774208 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-09-16 00:45:47.774219 | orchestrator | Tuesday 16 September 2025 00:44:49 +0000 (0:00:02.386) 0:00:11.644 ***** 2025-09-16 00:45:47.774230 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:45:47.774241 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:45:47.774252 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:45:47.774263 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:45:47.774273 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:45:47.774284 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:45:47.774294 | orchestrator | changed: [testbed-manager] 2025-09-16 00:45:47.774305 | orchestrator | 2025-09-16 00:45:47.774316 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-09-16 00:45:47.774327 | orchestrator | Tuesday 16 September 2025 00:45:01 +0000 (0:00:12.130) 0:00:23.774 ***** 2025-09-16 00:45:47.774338 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:45:47.774349 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:45:47.774360 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:45:47.774370 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:45:47.774381 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:45:47.774392 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:45:47.774402 | orchestrator | changed: [testbed-manager] 2025-09-16 00:45:47.774413 | 
orchestrator | 2025-09-16 00:45:47.774424 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-09-16 00:45:47.774435 | orchestrator | Tuesday 16 September 2025 00:45:25 +0000 (0:00:23.792) 0:00:47.567 ***** 2025-09-16 00:45:47.774447 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:45:47.774459 | orchestrator | 2025-09-16 00:45:47.774470 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-09-16 00:45:47.774481 | orchestrator | Tuesday 16 September 2025 00:45:26 +0000 (0:00:01.296) 0:00:48.864 ***** 2025-09-16 00:45:47.774492 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-09-16 00:45:47.774503 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-09-16 00:45:47.774514 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-09-16 00:45:47.774525 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-09-16 00:45:47.774536 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-09-16 00:45:47.774547 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-09-16 00:45:47.774557 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-09-16 00:45:47.774568 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-09-16 00:45:47.774579 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-09-16 00:45:47.774590 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-09-16 00:45:47.774600 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-09-16 00:45:47.774616 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-09-16 00:45:47.774627 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-09-16 00:45:47.774638 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-09-16 00:45:47.774649 | orchestrator | 2025-09-16 00:45:47.774660 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-09-16 00:45:47.774672 | orchestrator | Tuesday 16 September 2025 00:45:31 +0000 (0:00:04.854) 0:00:53.718 ***** 2025-09-16 00:45:47.774688 | orchestrator | ok: [testbed-manager] 2025-09-16 00:45:47.774700 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:45:47.774710 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:45:47.774721 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:45:47.774732 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:45:47.774796 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:45:47.774807 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:45:47.774818 | orchestrator | 2025-09-16 00:45:47.774829 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-09-16 00:45:47.774840 | orchestrator | Tuesday 16 September 2025 00:45:32 +0000 (0:00:00.993) 0:00:54.711 ***** 2025-09-16 00:45:47.774851 | orchestrator | changed: [testbed-manager] 2025-09-16 00:45:47.774861 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:45:47.774873 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:45:47.774883 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:45:47.774894 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:45:47.774905 | orchestrator | 
changed: [testbed-node-4] 2025-09-16 00:45:47.774916 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:45:47.774926 | orchestrator | 2025-09-16 00:45:47.774937 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-09-16 00:45:47.774956 | orchestrator | Tuesday 16 September 2025 00:45:33 +0000 (0:00:01.248) 0:00:55.959 ***** 2025-09-16 00:45:47.774967 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:45:47.774977 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:45:47.774988 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:45:47.774999 | orchestrator | ok: [testbed-manager] 2025-09-16 00:45:47.775009 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:45:47.775020 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:45:47.775031 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:45:47.775041 | orchestrator | 2025-09-16 00:45:47.775052 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-09-16 00:45:47.775063 | orchestrator | Tuesday 16 September 2025 00:45:35 +0000 (0:00:01.570) 0:00:57.530 ***** 2025-09-16 00:45:47.775073 | orchestrator | ok: [testbed-manager] 2025-09-16 00:45:47.775084 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:45:47.775095 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:45:47.775105 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:45:47.775116 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:45:47.775127 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:45:47.775137 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:45:47.775148 | orchestrator | 2025-09-16 00:45:47.775159 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-09-16 00:45:47.775170 | orchestrator | Tuesday 16 September 2025 00:45:37 +0000 (0:00:02.358) 0:00:59.889 ***** 2025-09-16 00:45:47.775181 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-09-16 00:45:47.775193 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:45:47.775204 | orchestrator | 2025-09-16 00:45:47.775215 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-09-16 00:45:47.775225 | orchestrator | Tuesday 16 September 2025 00:45:39 +0000 (0:00:01.663) 0:01:01.552 ***** 2025-09-16 00:45:47.775235 | orchestrator | changed: [testbed-manager] 2025-09-16 00:45:47.775245 | orchestrator | 2025-09-16 00:45:47.775254 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-09-16 00:45:47.775264 | orchestrator | Tuesday 16 September 2025 00:45:41 +0000 (0:00:02.095) 0:01:03.648 ***** 2025-09-16 00:45:47.775273 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:45:47.775283 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:45:47.775292 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:45:47.775302 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:45:47.775312 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:45:47.775328 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:45:47.775337 | orchestrator | changed: [testbed-manager] 2025-09-16 00:45:47.775347 | orchestrator | 2025-09-16 00:45:47.775357 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-16 00:45:47.775366 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:45:47.775376 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:45:47.775386 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:45:47.775396 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:45:47.775405 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:45:47.775415 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:45:47.775425 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:45:47.775434 | orchestrator | 2025-09-16 00:45:47.775444 | orchestrator | 2025-09-16 00:45:47.775453 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:45:47.775471 | orchestrator | Tuesday 16 September 2025 00:45:44 +0000 (0:00:03.625) 0:01:07.274 ***** 2025-09-16 00:45:47.775481 | orchestrator | =============================================================================== 2025-09-16 00:45:47.775491 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 23.79s 2025-09-16 00:45:47.775500 | orchestrator | osism.services.netdata : Add repository -------------------------------- 12.13s 2025-09-16 00:45:47.775510 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.85s 2025-09-16 00:45:47.775519 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.63s 2025-09-16 00:45:47.775529 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.42s 2025-09-16 00:45:47.775538 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.39s 2025-09-16 00:45:47.775548 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.36s 2025-09-16 00:45:47.775557 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.25s 2025-09-16 00:45:47.775567 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.10s 2025-09-16 00:45:47.775576 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.66s 2025-09-16 00:45:47.775586 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.57s 2025-09-16 00:45:47.775600 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.47s 2025-09-16 00:45:47.775610 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.42s 2025-09-16 00:45:47.775619 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.30s 2025-09-16 00:45:47.775629 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.25s 2025-09-16 00:45:47.775639 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 0.99s
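While the individual plays run, the deployment driver on the orchestrator keeps polling the remaining task IDs once per second, which is what the recurring "is in state STARTED" and "Wait 1 second(s) until the next check" entries record; a task leaves the rotation once it reports SUCCESS. A minimal sketch of that poll-and-wait loop follows; get_task_state() is a hypothetical stand-in for however the real tooling queries a task's state.

```python
import time


def get_task_state(task_id: str) -> str:
    # Hypothetical stand-in: a real implementation would ask the task
    # backend for the current state ("STARTED", "SUCCESS", ...).
    return "SUCCESS"


def wait_for_tasks(task_ids: list[str], interval: float = 1.0) -> None:
    """Report each pending task's state and sleep between rounds until
    every task has reached SUCCESS, mirroring the log output around here."""
    pending = list(task_ids)
    while pending:
        still_pending = []
        for task_id in pending:
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "SUCCESS":
                still_pending.append(task_id)
        pending = still_pending
        if pending:
            print(f"Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)


if __name__ == "__main__":
    wait_for_tasks(["e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df"])
```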
2025-09-16 00:45:47.775649 | orchestrator | 2025-09-16 00:45:47 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:45:47.775659 | orchestrator | 2025-09-16 00:45:47 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:45:47.775669 | orchestrator | 2025-09-16 00:45:47 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:45:50.804022 | orchestrator | 2025-09-16 00:45:50 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:45:50.805351 | orchestrator | 2025-09-16 00:45:50 | INFO  | Task ce9228ad-6c55-4075-bc5b-5fac8323caf4 is in state STARTED 2025-09-16 00:45:50.805380 | orchestrator | 2025-09-16 00:45:50 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:45:50.806951 | orchestrator | 2025-09-16 00:45:50 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:45:50.806979 | orchestrator | 2025-09-16 00:45:50 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:45:53.846506 | orchestrator | 2025-09-16 00:45:53 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:45:53.847100 | orchestrator | 2025-09-16 00:45:53 | INFO  | Task ce9228ad-6c55-4075-bc5b-5fac8323caf4 is in state SUCCESS 2025-09-16 00:45:53.849168 | orchestrator | 2025-09-16 00:45:53 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:45:53.850546 | orchestrator | 2025-09-16 00:45:53 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:45:53.851587 | orchestrator | 2025-09-16 00:45:53 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:45:56.892527 | orchestrator | 2025-09-16 00:45:56 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:45:56.893331 | orchestrator | 2025-09-16 00:45:56 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:45:56.896952 | orchestrator | 2025-09-16 00:45:56 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:45:56.896988 | orchestrator | 2025-09-16 00:45:56 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:45:59.944441 | orchestrator | 2025-09-16 00:45:59 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:45:59.944927 | orchestrator | 2025-09-16 00:45:59 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:45:59.948077 | orchestrator | 2025-09-16 00:45:59 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:45:59.948110 | orchestrator | 2025-09-16 00:45:59 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:46:02.993980 | orchestrator | 2025-09-16 00:46:02 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:46:02.995974 | orchestrator | 2025-09-16 00:46:02 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:46:02.997989 | orchestrator | 2025-09-16 00:46:02 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:46:02.998249 | orchestrator | 2025-09-16 00:46:02 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:46:06.037422 | orchestrator | 2025-09-16 00:46:06 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:46:06.039488 | orchestrator | 2025-09-16 00:46:06 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:46:06.040872 | orchestrator | 2025-09-16 00:46:06 | INFO  | Task
6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:46:06.040896 | orchestrator | 2025-09-16 00:46:06 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:46:09.097643 | orchestrator | 2025-09-16 00:46:09 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:46:09.099232 | orchestrator | 2025-09-16 00:46:09 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:46:09.101487 | orchestrator | 2025-09-16 00:46:09 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:46:09.101530 | orchestrator | 2025-09-16 00:46:09 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:46:12.138011 | orchestrator | 2025-09-16 00:46:12 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:46:12.139158 | orchestrator | 2025-09-16 00:46:12 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:46:12.140452 | orchestrator | 2025-09-16 00:46:12 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:46:12.140473 | orchestrator | 2025-09-16 00:46:12 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:46:15.181426 | orchestrator | 2025-09-16 00:46:15 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:46:15.184044 | orchestrator | 2025-09-16 00:46:15 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:46:15.184992 | orchestrator | 2025-09-16 00:46:15 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:46:15.185507 | orchestrator | 2025-09-16 00:46:15 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:46:18.224332 | orchestrator | 2025-09-16 00:46:18 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:46:18.224831 | orchestrator | 2025-09-16 00:46:18 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:46:18.225717 | orchestrator | 2025-09-16 00:46:18 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:46:18.225852 | orchestrator | 2025-09-16 00:46:18 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:46:21.280997 | orchestrator | 2025-09-16 00:46:21 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:46:21.284396 | orchestrator | 2025-09-16 00:46:21 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:46:21.287955 | orchestrator | 2025-09-16 00:46:21 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:46:21.287990 | orchestrator | 2025-09-16 00:46:21 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:46:24.329902 | orchestrator | 2025-09-16 00:46:24 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:46:24.332637 | orchestrator | 2025-09-16 00:46:24 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:46:24.334125 | orchestrator | 2025-09-16 00:46:24 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:46:24.334150 | orchestrator | 2025-09-16 00:46:24 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:46:27.373584 | orchestrator | 2025-09-16 00:46:27 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:46:27.375026 | orchestrator | 2025-09-16 00:46:27 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state 
STARTED 2025-09-16 00:46:27.378559 | orchestrator | 2025-09-16 00:46:27 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:46:27.378585 | orchestrator | 2025-09-16 00:46:27 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:46:30.428697 | orchestrator | 2025-09-16 00:46:30 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:46:30.428921 | orchestrator | 2025-09-16 00:46:30 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:46:30.429704 | orchestrator | 2025-09-16 00:46:30 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:46:30.429784 | orchestrator | 2025-09-16 00:46:30 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:46:33.468893 | orchestrator | 2025-09-16 00:46:33 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:46:33.469313 | orchestrator | 2025-09-16 00:46:33 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:46:33.470501 | orchestrator | 2025-09-16 00:46:33 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:46:33.470538 | orchestrator | 2025-09-16 00:46:33 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:46:36.508013 | orchestrator | 2025-09-16 00:46:36 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:46:36.510515 | orchestrator | 2025-09-16 00:46:36 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:46:36.512075 | orchestrator | 2025-09-16 00:46:36 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:46:36.512099 | orchestrator | 2025-09-16 00:46:36 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:46:39.544852 | orchestrator | 2025-09-16 00:46:39 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:46:39.545901 | orchestrator | 2025-09-16 00:46:39 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:46:39.547230 | orchestrator | 2025-09-16 00:46:39 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:46:39.547256 | orchestrator | 2025-09-16 00:46:39 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:46:42.590583 | orchestrator | 2025-09-16 00:46:42 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:46:42.591384 | orchestrator | 2025-09-16 00:46:42 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:46:42.592391 | orchestrator | 2025-09-16 00:46:42 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:46:42.592606 | orchestrator | 2025-09-16 00:46:42 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:46:45.627861 | orchestrator | 2025-09-16 00:46:45 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:46:45.630441 | orchestrator | 2025-09-16 00:46:45 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:46:45.632283 | orchestrator | 2025-09-16 00:46:45 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:46:45.632308 | orchestrator | 2025-09-16 00:46:45 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:46:48.667350 | orchestrator | 2025-09-16 00:46:48 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:46:48.668415 | orchestrator 
| 2025-09-16 00:46:48 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:46:48.668446 | orchestrator | 2025-09-16 00:46:48 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:46:48.668459 | orchestrator | 2025-09-16 00:46:48 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:46:51.708043 | orchestrator | 2025-09-16 00:46:51 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:46:51.708148 | orchestrator | 2025-09-16 00:46:51 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:46:51.709579 | orchestrator | 2025-09-16 00:46:51 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:46:51.709610 | orchestrator | 2025-09-16 00:46:51 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:46:54.737844 | orchestrator | 2025-09-16 00:46:54 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:46:54.738170 | orchestrator | 2025-09-16 00:46:54 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:46:54.739305 | orchestrator | 2025-09-16 00:46:54 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:46:54.739399 | orchestrator | 2025-09-16 00:46:54 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:46:57.777975 | orchestrator | 2025-09-16 00:46:57 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:46:57.781350 | orchestrator | 2025-09-16 00:46:57 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:46:57.785442 | orchestrator | 2025-09-16 00:46:57 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state STARTED 2025-09-16 00:46:57.786180 | orchestrator | 2025-09-16 00:46:57 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:47:00.832698 | orchestrator | 2025-09-16 00:47:00 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:47:00.832936 | orchestrator | 2025-09-16 00:47:00 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:47:00.838302 | orchestrator | 2025-09-16 00:47:00 | INFO  | Task 6811dfb5-b9ec-46b2-a6c5-708448e43fce is in state SUCCESS 2025-09-16 00:47:00.841083 | orchestrator | 2025-09-16 00:47:00.841123 | orchestrator | 2025-09-16 00:47:00.841135 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-09-16 00:47:00.841146 | orchestrator | 2025-09-16 00:47:00.841156 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-09-16 00:47:00.841166 | orchestrator | Tuesday 16 September 2025 00:44:56 +0000 (0:00:00.323) 0:00:00.323 ***** 2025-09-16 00:47:00.841176 | orchestrator | ok: [testbed-manager] 2025-09-16 00:47:00.841188 | orchestrator | 2025-09-16 00:47:00.841198 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-09-16 00:47:00.841208 | orchestrator | Tuesday 16 September 2025 00:44:57 +0000 (0:00:00.853) 0:00:01.177 ***** 2025-09-16 00:47:00.841218 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-09-16 00:47:00.841228 | orchestrator | 2025-09-16 00:47:00.841237 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-09-16 00:47:00.841247 | orchestrator | Tuesday 16 September 2025 00:44:58 +0000 (0:00:00.571) 0:00:01.749 ***** 2025-09-16 
00:47:00.841257 | orchestrator | changed: [testbed-manager] 2025-09-16 00:47:00.841267 | orchestrator | 2025-09-16 00:47:00.841276 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-09-16 00:47:00.841286 | orchestrator | Tuesday 16 September 2025 00:44:59 +0000 (0:00:01.166) 0:00:02.915 ***** 2025-09-16 00:47:00.841296 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2025-09-16 00:47:00.841314 | orchestrator | ok: [testbed-manager] 2025-09-16 00:47:00.841325 | orchestrator | 2025-09-16 00:47:00.841335 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-09-16 00:47:00.841344 | orchestrator | Tuesday 16 September 2025 00:45:42 +0000 (0:00:43.386) 0:00:46.301 ***** 2025-09-16 00:47:00.841354 | orchestrator | changed: [testbed-manager] 2025-09-16 00:47:00.841364 | orchestrator | 2025-09-16 00:47:00.841374 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:47:00.841384 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:47:00.841395 | orchestrator | 2025-09-16 00:47:00.841405 | orchestrator | 2025-09-16 00:47:00.841414 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:47:00.841424 | orchestrator | Tuesday 16 September 2025 00:45:52 +0000 (0:00:09.759) 0:00:56.061 ***** 2025-09-16 00:47:00.841451 | orchestrator | =============================================================================== 2025-09-16 00:47:00.841462 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 43.39s 2025-09-16 00:47:00.841471 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 9.76s 2025-09-16 00:47:00.841481 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.17s 2025-09-16 00:47:00.841490 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.85s 2025-09-16 00:47:00.841500 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.57s 2025-09-16 00:47:00.841509 | orchestrator | 2025-09-16 00:47:00.841519 | orchestrator | 2025-09-16 00:47:00.841529 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-09-16 00:47:00.841538 | orchestrator | 2025-09-16 00:47:00.841548 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-16 00:47:00.841558 | orchestrator | Tuesday 16 September 2025 00:44:30 +0000 (0:00:00.298) 0:00:00.298 ***** 2025-09-16 00:47:00.841567 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:47:00.841578 | orchestrator | 2025-09-16 00:47:00.841588 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-09-16 00:47:00.841597 | orchestrator | Tuesday 16 September 2025 00:44:31 +0000 (0:00:01.404) 0:00:01.702 ***** 2025-09-16 00:47:00.841607 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-16 00:47:00.841616 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-16 00:47:00.841626 | orchestrator | changed: 
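Reduced to its effects, the phpmyadmin play above ensures the external "traefik" Docker network exists, creates /opt/phpmyadmin, drops a docker-compose.yml into it, and brings the service up (the "Manage phpmyadmin service" task needed one retry before reporting ok). A rough Python illustration of those steps using the docker SDK and the docker compose CLI; the network name, directory and retry count come from the log, everything else is an assumption and not the role's actual implementation:

import subprocess
import time

import docker  # requires the 'docker' Python package

def deploy_phpmyadmin(compose_dir="/opt/phpmyadmin", network="traefik", retries=10):
    client = docker.from_env()
    # "Create traefik external network": only create it if it is missing.
    if not client.networks.list(names=[network]):
        client.networks.create(network, driver="bridge")
    # "Manage phpmyadmin service": docker compose up, retried a few times,
    # as the FAILED - RETRYING line in the log suggests the role does.
    for attempt in range(retries):
        if subprocess.run(["docker", "compose", "up", "-d"], cwd=compose_dir).returncode == 0:
            return
        print(f"Manage phpmyadmin service ({retries - attempt - 1} retries left)")
        time.sleep(5)
    raise RuntimeError("phpmyadmin service did not come up")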
[testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-16 00:47:00.841636 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-16 00:47:00.841645 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-16 00:47:00.841656 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-16 00:47:00.841668 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-16 00:47:00.841680 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-16 00:47:00.841692 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-16 00:47:00.841707 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-16 00:47:00.841719 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-16 00:47:00.841752 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-16 00:47:00.841764 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-16 00:47:00.841775 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-16 00:47:00.841786 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-16 00:47:00.841797 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-16 00:47:00.841822 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-16 00:47:00.841833 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-16 00:47:00.841845 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-16 00:47:00.841856 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-16 00:47:00.841867 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-16 00:47:00.841878 | orchestrator | 2025-09-16 00:47:00.841896 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-16 00:47:00.841907 | orchestrator | Tuesday 16 September 2025 00:44:35 +0000 (0:00:04.114) 0:00:05.817 ***** 2025-09-16 00:47:00.841919 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:47:00.841932 | orchestrator | 2025-09-16 00:47:00.841943 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-09-16 00:47:00.841954 | orchestrator | Tuesday 16 September 2025 00:44:36 +0000 (0:00:01.218) 0:00:07.035 ***** 2025-09-16 00:47:00.841970 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
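Every (item=...) printed by the common role above is one entry of a per-service map; the same three definitions (fluentd, kolla-toolbox, cron) drive the directory creation, the certificate copy and the config.json tasks that follow. A trimmed-down sketch of that structure and of the "Ensuring config directories exist" step; the images and paths are copied from the log, the surrounding code is only illustrative:

from pathlib import Path

# Condensed form of the service items shown in the log output.
common_services = {
    "fluentd": {
        "container_name": "fluentd",
        "enabled": True,
        "image": "registry.osism.tech/kolla/fluentd:2024.2",
    },
    "kolla-toolbox": {
        "container_name": "kolla_toolbox",
        "enabled": True,
        "image": "registry.osism.tech/kolla/kolla-toolbox:2024.2",
    },
    "cron": {
        "container_name": "cron",
        "enabled": True,
        "image": "registry.osism.tech/kolla/cron:2024.2",
    },
}

def ensure_config_directories(base="/etc/kolla"):
    # "Ensuring config directories exist": one directory per enabled service,
    # matching the /etc/kolla/<service>/ paths mounted in the volume lists above.
    for name, service in common_services.items():
        if service["enabled"]:
            Path(base, name).mkdir(parents=True, exist_ok=True)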
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.841986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.841999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.842011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.842091 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.842110 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.842121 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.842139 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.842150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.842160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.842171 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.842185 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.842197 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 
00:47:00.842233 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.842251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.842262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.842272 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.842282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.842292 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.842303 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-09-16 00:47:00.842317 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.842327 | orchestrator | 2025-09-16 00:47:00.842337 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-09-16 00:47:00.842359 | orchestrator | Tuesday 16 September 2025 00:44:41 +0000 (0:00:04.963) 0:00:11.998 ***** 2025-09-16 00:47:00.842392 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-16 00:47:00.842404 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.842414 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.842424 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:47:00.842434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-16 00:47:00.842445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.842455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.842465 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:47:00.842475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-16 00:47:00.842495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.842518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.842528 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:47:00.842538 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-16 00:47:00.842548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-09-16 00:47:00.842559 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.842569 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:47:00.842579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-16 00:47:00.842589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-16 00:47:00.842603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.842619 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.842630 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:47:00.842645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.842656 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.842666 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:47:00.842676 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-16 00:47:00.842687 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.842697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.842707 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:47:00.842717 | orchestrator | 2025-09-16 00:47:00.842745 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-09-16 00:47:00.842756 | orchestrator | Tuesday 16 September 2025 00:44:43 +0000 (0:00:01.475) 0:00:13.473 ***** 2025-09-16 00:47:00.842771 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-16 00:47:00.842782 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.842798 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.842808 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:47:00.842818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-16 00:47:00.842833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.842844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.842854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-16 00:47:00.842865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.842881 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.842891 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:47:00.842905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-16 00:47:00.842920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.842931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.842941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-16 00:47:00.842952 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.842962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.842977 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:47:00.842987 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:47:00.842997 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:47:00.843007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-16 00:47:00.843022 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.843036 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.843046 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:47:00.843057 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-16 00:47:00.843067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.843077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
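Both backend TLS tasks above ("Copying over backend internal TLS certificate" and "... TLS key") skip every item on every host, which is the expected result when backend TLS is not enabled for this deployment. A small sketch of that conditional copy; the enable flag and file names are assumptions made for illustration, since the log only records the skips:

import shutil
from pathlib import Path

def copy_backend_tls(service_name, tls_backend_enabled=False,
                     cert_dir="/etc/kolla/certificates", config_dir="/etc/kolla"):
    # When backend TLS is disabled the copy is skipped for every service,
    # which is exactly what the log shows for all hosts.
    if not tls_backend_enabled:
        print(f"skipping: {service_name}")
        return
    for filename in ("backend-cert.pem", "backend-key.pem"):  # assumed file names
        shutil.copy2(Path(cert_dir, filename), Path(config_dir, service_name, filename))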
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.843087 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:47:00.843097 | orchestrator | 2025-09-16 00:47:00.843107 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-09-16 00:47:00.843117 | orchestrator | Tuesday 16 September 2025 00:44:45 +0000 (0:00:01.983) 0:00:15.457 ***** 2025-09-16 00:47:00.843127 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:47:00.843141 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:47:00.843150 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:47:00.843160 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:47:00.843170 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:47:00.843179 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:47:00.843188 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:47:00.843198 | orchestrator | 2025-09-16 00:47:00.843208 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-09-16 00:47:00.843217 | orchestrator | Tuesday 16 September 2025 00:44:46 +0000 (0:00:00.940) 0:00:16.398 ***** 2025-09-16 00:47:00.843227 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:47:00.843237 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:47:00.843246 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:47:00.843256 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:47:00.843265 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:47:00.843275 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:47:00.843284 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:47:00.843294 | orchestrator | 2025-09-16 00:47:00.843303 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-09-16 00:47:00.843313 | orchestrator | Tuesday 16 September 2025 00:44:47 +0000 (0:00:01.611) 0:00:18.009 ***** 2025-09-16 00:47:00.843323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.843337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.843354 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.843364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.843374 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.843390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.843400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.843411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.843425 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.843440 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.843450 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.843460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.843479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.843490 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.843500 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
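The "Copying over config.json files for services" task renders one config.json per service into its /etc/kolla/<service>/ directory; at container start these files tell kolla which command to run and which config files to copy into place. The exact contents are not visible in this log, so the sketch below is only representative of that convention, using the cron service as an example:

import json
from pathlib import Path

def write_config_json(service="cron", config_dir="/etc/kolla"):
    # Representative kolla-style config.json: a start command plus a list of
    # files to copy into the container (all values here are illustrative).
    config = {
        "command": "/usr/sbin/cron -f",
        "config_files": [
            {"source": "/var/lib/kolla/config_files/logrotate.conf",
             "dest": "/etc/logrotate.conf", "owner": "root", "perm": "0644"},
        ],
    }
    target = Path(config_dir, service, "config.json")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(json.dumps(config, indent=2))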
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.843511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.843525 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.843540 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.843551 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.843561 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.843576 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.843587 | orchestrator | 2025-09-16 00:47:00.843597 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-09-16 00:47:00.843606 | orchestrator | Tuesday 16 September 2025 00:44:56 +0000 (0:00:08.396) 0:00:26.406 ***** 2025-09-16 00:47:00.843616 | orchestrator | [WARNING]: Skipped 2025-09-16 00:47:00.843627 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-09-16 00:47:00.843637 | orchestrator | to this access issue: 2025-09-16 00:47:00.843647 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-09-16 00:47:00.843657 | orchestrator | directory 2025-09-16 00:47:00.843667 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-16 00:47:00.843677 | orchestrator | 2025-09-16 00:47:00.843687 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-09-16 00:47:00.843696 | orchestrator | Tuesday 16 September 2025 00:44:57 +0000 (0:00:01.214) 0:00:27.620 ***** 2025-09-16 00:47:00.843706 | orchestrator | [WARNING]: Skipped 2025-09-16 00:47:00.843716 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-09-16 00:47:00.843754 | orchestrator | to this access issue: 2025-09-16 00:47:00.843765 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-09-16 00:47:00.843775 | orchestrator | directory 2025-09-16 00:47:00.843785 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-16 00:47:00.843795 | orchestrator | 2025-09-16 00:47:00.843805 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-09-16 00:47:00.843815 | orchestrator | Tuesday 16 September 2025 00:44:58 +0000 (0:00:01.161) 0:00:28.782 ***** 2025-09-16 00:47:00.843825 | orchestrator | [WARNING]: Skipped 2025-09-16 00:47:00.843835 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-09-16 00:47:00.843844 | orchestrator | to this access issue: 2025-09-16 00:47:00.843854 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-09-16 00:47:00.843864 | orchestrator | directory 2025-09-16 00:47:00.843874 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-16 00:47:00.843884 | orchestrator | 2025-09-16 00:47:00.843894 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-09-16 00:47:00.843903 | orchestrator | Tuesday 16 September 2025 00:44:59 +0000 (0:00:01.093) 0:00:29.875 ***** 2025-09-16 00:47:00.843913 | orchestrator | [WARNING]: Skipped 2025-09-16 00:47:00.843923 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-09-16 00:47:00.843933 | orchestrator | to this access issue: 2025-09-16 00:47:00.843943 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-09-16 00:47:00.843952 | orchestrator | directory 2025-09-16 00:47:00.843962 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-16 00:47:00.843972 | orchestrator | 2025-09-16 00:47:00.843981 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-09-16 00:47:00.843991 | orchestrator | Tuesday 16 September 2025 00:45:00 +0000 (0:00:00.901) 0:00:30.776 ***** 2025-09-16 00:47:00.844001 | orchestrator | changed: [testbed-manager] 2025-09-16 00:47:00.844011 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:47:00.844021 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:47:00.844030 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:47:00.844044 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:47:00.844061 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:47:00.844071 | orchestrator | changed: 
[testbed-node-3] 2025-09-16 00:47:00.844081 | orchestrator | 2025-09-16 00:47:00.844091 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-09-16 00:47:00.844100 | orchestrator | Tuesday 16 September 2025 00:45:05 +0000 (0:00:05.285) 0:00:36.062 ***** 2025-09-16 00:47:00.844110 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-16 00:47:00.844121 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-16 00:47:00.844130 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-16 00:47:00.844145 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-16 00:47:00.844155 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-16 00:47:00.844165 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-16 00:47:00.844174 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-16 00:47:00.844184 | orchestrator | 2025-09-16 00:47:00.844194 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-09-16 00:47:00.844204 | orchestrator | Tuesday 16 September 2025 00:45:10 +0000 (0:00:04.375) 0:00:40.438 ***** 2025-09-16 00:47:00.844213 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:47:00.844223 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:47:00.844232 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:47:00.844242 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:47:00.844251 | orchestrator | changed: [testbed-manager] 2025-09-16 00:47:00.844261 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:47:00.844270 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:47:00.844280 | orchestrator | 2025-09-16 00:47:00.844289 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-09-16 00:47:00.844299 | orchestrator | Tuesday 16 September 2025 00:45:14 +0000 (0:00:04.110) 0:00:44.548 ***** 2025-09-16 00:47:00.844309 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.844320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-09-16 00:47:00.844331 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.844345 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.844364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.844381 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.844391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.844401 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.844412 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.844422 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.844433 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.844448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.844458 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.844473 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.844484 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2025-09-16 00:47:00.844498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.844508 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.844518 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.844529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:47:00.844545 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.844560 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.844570 | orchestrator | 2025-09-16 00:47:00.844580 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-09-16 00:47:00.844590 | orchestrator | Tuesday 16 September 2025 00:45:16 +0000 (0:00:02.564) 0:00:47.113 ***** 2025-09-16 00:47:00.844599 | orchestrator | changed: 
[testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-16 00:47:00.844609 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-16 00:47:00.844619 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-16 00:47:00.844637 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-16 00:47:00.844647 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-16 00:47:00.844657 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-16 00:47:00.844667 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-16 00:47:00.844676 | orchestrator | 2025-09-16 00:47:00.844686 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-09-16 00:47:00.844696 | orchestrator | Tuesday 16 September 2025 00:45:21 +0000 (0:00:04.257) 0:00:51.371 ***** 2025-09-16 00:47:00.844705 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-16 00:47:00.844715 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-16 00:47:00.844739 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-16 00:47:00.844749 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-16 00:47:00.844758 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-16 00:47:00.844768 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-16 00:47:00.844778 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-16 00:47:00.844787 | orchestrator | 2025-09-16 00:47:00.844797 | orchestrator | TASK [common : Check common containers] **************************************** 2025-09-16 00:47:00.844806 | orchestrator | Tuesday 16 September 2025 00:45:23 +0000 (0:00:02.606) 0:00:53.977 ***** 2025-09-16 00:47:00.844816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.844833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.844844 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.844854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.844869 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.844885 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.844896 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-16 00:47:00.844906 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.844922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.844932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.844942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.844956 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.844980 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.844991 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.845001 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.845020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.845030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.845040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.845051 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.845065 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.845076 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:47:00.845085 | orchestrator | 2025-09-16 00:47:00.845099 | orchestrator | TASK [common : Creating log volume] ************************2025-09-16 00:47:00 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:47:00.845151 | orchestrator | ******************** 2025-09-16 
00:47:00.845163 | orchestrator | Tuesday 16 September 2025 00:45:26 +0000 (0:00:02.984) 0:00:56.962 ***** 2025-09-16 00:47:00.845173 | orchestrator | changed: [testbed-manager] 2025-09-16 00:47:00.845182 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:47:00.845192 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:47:00.845201 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:47:00.845211 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:47:00.845220 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:47:00.845230 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:47:00.845239 | orchestrator | 2025-09-16 00:47:00.845249 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-09-16 00:47:00.845266 | orchestrator | Tuesday 16 September 2025 00:45:29 +0000 (0:00:02.418) 0:00:59.380 ***** 2025-09-16 00:47:00.845276 | orchestrator | changed: [testbed-manager] 2025-09-16 00:47:00.845285 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:47:00.845294 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:47:00.845304 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:47:00.845313 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:47:00.845323 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:47:00.845332 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:47:00.845342 | orchestrator | 2025-09-16 00:47:00.845351 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-16 00:47:00.845361 | orchestrator | Tuesday 16 September 2025 00:45:30 +0000 (0:00:01.686) 0:01:01.067 ***** 2025-09-16 00:47:00.845371 | orchestrator | 2025-09-16 00:47:00.845380 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-16 00:47:00.845390 | orchestrator | Tuesday 16 September 2025 00:45:30 +0000 (0:00:00.064) 0:01:01.131 ***** 2025-09-16 00:47:00.845399 | orchestrator | 2025-09-16 00:47:00.845409 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-16 00:47:00.845419 | orchestrator | Tuesday 16 September 2025 00:45:30 +0000 (0:00:00.059) 0:01:01.190 ***** 2025-09-16 00:47:00.845428 | orchestrator | 2025-09-16 00:47:00.845438 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-16 00:47:00.845447 | orchestrator | Tuesday 16 September 2025 00:45:31 +0000 (0:00:00.066) 0:01:01.257 ***** 2025-09-16 00:47:00.845457 | orchestrator | 2025-09-16 00:47:00.845467 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-16 00:47:00.845476 | orchestrator | Tuesday 16 September 2025 00:45:31 +0000 (0:00:00.160) 0:01:01.417 ***** 2025-09-16 00:47:00.845485 | orchestrator | 2025-09-16 00:47:00.845495 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-16 00:47:00.845504 | orchestrator | Tuesday 16 September 2025 00:45:31 +0000 (0:00:00.060) 0:01:01.478 ***** 2025-09-16 00:47:00.845514 | orchestrator | 2025-09-16 00:47:00.845523 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-16 00:47:00.845533 | orchestrator | Tuesday 16 September 2025 00:45:31 +0000 (0:00:00.068) 0:01:01.546 ***** 2025-09-16 00:47:00.845542 | orchestrator | 2025-09-16 00:47:00.845552 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 
2025-09-16 00:47:00.845561 | orchestrator | Tuesday 16 September 2025 00:45:31 +0000 (0:00:00.084) 0:01:01.630 ***** 2025-09-16 00:47:00.845571 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:47:00.845580 | orchestrator | changed: [testbed-manager] 2025-09-16 00:47:00.845590 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:47:00.845599 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:47:00.845609 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:47:00.845618 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:47:00.845628 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:47:00.845637 | orchestrator | 2025-09-16 00:47:00.845647 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-09-16 00:47:00.845656 | orchestrator | Tuesday 16 September 2025 00:46:09 +0000 (0:00:37.742) 0:01:39.372 ***** 2025-09-16 00:47:00.845666 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:47:00.845675 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:47:00.845685 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:47:00.845694 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:47:00.845703 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:47:00.845713 | orchestrator | changed: [testbed-manager] 2025-09-16 00:47:00.845722 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:47:00.845780 | orchestrator | 2025-09-16 00:47:00.845790 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-09-16 00:47:00.845800 | orchestrator | Tuesday 16 September 2025 00:46:48 +0000 (0:00:39.740) 0:02:19.113 ***** 2025-09-16 00:47:00.845809 | orchestrator | ok: [testbed-manager] 2025-09-16 00:47:00.845819 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:47:00.845835 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:47:00.845845 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:47:00.845854 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:47:00.845864 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:47:00.845874 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:47:00.845883 | orchestrator | 2025-09-16 00:47:00.845893 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-09-16 00:47:00.845903 | orchestrator | Tuesday 16 September 2025 00:46:50 +0000 (0:00:01.928) 0:02:21.042 ***** 2025-09-16 00:47:00.845911 | orchestrator | changed: [testbed-manager] 2025-09-16 00:47:00.845919 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:47:00.845930 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:47:00.845938 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:47:00.845946 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:47:00.845954 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:47:00.845962 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:47:00.845970 | orchestrator | 2025-09-16 00:47:00.845977 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:47:00.845987 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-16 00:47:00.845995 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-16 00:47:00.846008 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-16 00:47:00.846040 | orchestrator | testbed-node-2 : ok=18  changed=14 
 unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-16 00:47:00.846050 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-16 00:47:00.846058 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-16 00:47:00.846066 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-16 00:47:00.846074 | orchestrator | 2025-09-16 00:47:00.846082 | orchestrator | 2025-09-16 00:47:00.846090 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:47:00.846098 | orchestrator | Tuesday 16 September 2025 00:47:00 +0000 (0:00:09.431) 0:02:30.474 ***** 2025-09-16 00:47:00.846106 | orchestrator | =============================================================================== 2025-09-16 00:47:00.846113 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 39.74s 2025-09-16 00:47:00.846121 | orchestrator | common : Restart fluentd container ------------------------------------- 37.74s 2025-09-16 00:47:00.846129 | orchestrator | common : Restart cron container ----------------------------------------- 9.43s 2025-09-16 00:47:00.846137 | orchestrator | common : Copying over config.json files for services -------------------- 8.40s 2025-09-16 00:47:00.846145 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.29s 2025-09-16 00:47:00.846153 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.96s 2025-09-16 00:47:00.846161 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.38s 2025-09-16 00:47:00.846168 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 4.26s 2025-09-16 00:47:00.846176 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.11s 2025-09-16 00:47:00.846184 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.11s 2025-09-16 00:47:00.846192 | orchestrator | common : Check common containers ---------------------------------------- 2.98s 2025-09-16 00:47:00.846205 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.60s 2025-09-16 00:47:00.846213 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.56s 2025-09-16 00:47:00.846221 | orchestrator | common : Creating log volume -------------------------------------------- 2.42s 2025-09-16 00:47:00.846229 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.98s 2025-09-16 00:47:00.846237 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.93s 2025-09-16 00:47:00.846245 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.69s 2025-09-16 00:47:00.846253 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.61s 2025-09-16 00:47:00.846260 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.48s 2025-09-16 00:47:00.846268 | orchestrator | common : include_tasks -------------------------------------------------- 1.40s 2025-09-16 00:47:03.882160 | orchestrator | 2025-09-16 00:47:03 | INFO  | Task fb426eac-1bb2-4767-b418-56f355a9c808 is in state STARTED 2025-09-16 
00:47:03.883231 | orchestrator | 2025-09-16 00:47:03 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:47:03.884073 | orchestrator | 2025-09-16 00:47:03 | INFO  | Task ad1b5405-5d3a-4605-8724-0b1539b5e8d3 is in state STARTED 2025-09-16 00:47:03.884527 | orchestrator | 2025-09-16 00:47:03 | INFO  | Task a612fdce-8132-436f-966b-d0a4e0eb1200 is in state STARTED 2025-09-16 00:47:03.885316 | orchestrator | 2025-09-16 00:47:03 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:47:03.886874 | orchestrator | 2025-09-16 00:47:03 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:47:03.886904 | orchestrator | 2025-09-16 00:47:03 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:47:06.908861 | orchestrator | 2025-09-16 00:47:06 | INFO  | Task fb426eac-1bb2-4767-b418-56f355a9c808 is in state STARTED 2025-09-16 00:47:06.909075 | orchestrator | 2025-09-16 00:47:06 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:47:06.911131 | orchestrator | 2025-09-16 00:47:06 | INFO  | Task ad1b5405-5d3a-4605-8724-0b1539b5e8d3 is in state STARTED 2025-09-16 00:47:06.912685 | orchestrator | 2025-09-16 00:47:06 | INFO  | Task a612fdce-8132-436f-966b-d0a4e0eb1200 is in state STARTED 2025-09-16 00:47:06.913375 | orchestrator | 2025-09-16 00:47:06 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:47:06.914883 | orchestrator | 2025-09-16 00:47:06 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:47:06.914906 | orchestrator | 2025-09-16 00:47:06 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:47:09.939027 | orchestrator | 2025-09-16 00:47:09 | INFO  | Task fb426eac-1bb2-4767-b418-56f355a9c808 is in state STARTED 2025-09-16 00:47:09.939140 | orchestrator | 2025-09-16 00:47:09 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:47:09.939558 | orchestrator | 2025-09-16 00:47:09 | INFO  | Task ad1b5405-5d3a-4605-8724-0b1539b5e8d3 is in state STARTED 2025-09-16 00:47:09.941643 | orchestrator | 2025-09-16 00:47:09 | INFO  | Task a612fdce-8132-436f-966b-d0a4e0eb1200 is in state STARTED 2025-09-16 00:47:09.942219 | orchestrator | 2025-09-16 00:47:09 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:47:09.943107 | orchestrator | 2025-09-16 00:47:09 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:47:09.943152 | orchestrator | 2025-09-16 00:47:09 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:47:12.976039 | orchestrator | 2025-09-16 00:47:12 | INFO  | Task fb426eac-1bb2-4767-b418-56f355a9c808 is in state STARTED 2025-09-16 00:47:12.976271 | orchestrator | 2025-09-16 00:47:12 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:47:12.976298 | orchestrator | 2025-09-16 00:47:12 | INFO  | Task ad1b5405-5d3a-4605-8724-0b1539b5e8d3 is in state STARTED 2025-09-16 00:47:12.976942 | orchestrator | 2025-09-16 00:47:12 | INFO  | Task a612fdce-8132-436f-966b-d0a4e0eb1200 is in state STARTED 2025-09-16 00:47:12.977768 | orchestrator | 2025-09-16 00:47:12 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:47:12.978524 | orchestrator | 2025-09-16 00:47:12 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:47:12.978718 | orchestrator | 2025-09-16 00:47:12 | INFO  | Wait 1 second(s) until the 
next check 2025-09-16 00:47:16.005948 | orchestrator | 2025-09-16 00:47:16 | INFO  | Task fb426eac-1bb2-4767-b418-56f355a9c808 is in state STARTED 2025-09-16 00:47:16.006346 | orchestrator | 2025-09-16 00:47:16 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:47:16.006390 | orchestrator | 2025-09-16 00:47:16 | INFO  | Task ad1b5405-5d3a-4605-8724-0b1539b5e8d3 is in state STARTED 2025-09-16 00:47:16.006412 | orchestrator | 2025-09-16 00:47:16 | INFO  | Task a612fdce-8132-436f-966b-d0a4e0eb1200 is in state SUCCESS 2025-09-16 00:47:16.006451 | orchestrator | 2025-09-16 00:47:16 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:47:16.006596 | orchestrator | 2025-09-16 00:47:16 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:47:16.006700 | orchestrator | 2025-09-16 00:47:16 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:47:19.051548 | orchestrator | 2025-09-16 00:47:19 | INFO  | Task fb426eac-1bb2-4767-b418-56f355a9c808 is in state STARTED 2025-09-16 00:47:19.052013 | orchestrator | 2025-09-16 00:47:19 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:47:19.052527 | orchestrator | 2025-09-16 00:47:19 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:47:19.053545 | orchestrator | 2025-09-16 00:47:19 | INFO  | Task ad1b5405-5d3a-4605-8724-0b1539b5e8d3 is in state STARTED 2025-09-16 00:47:19.055317 | orchestrator | 2025-09-16 00:47:19 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:47:19.057171 | orchestrator | 2025-09-16 00:47:19 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:47:19.061042 | orchestrator | 2025-09-16 00:47:19 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:47:22.154343 | orchestrator | 2025-09-16 00:47:22 | INFO  | Task fb426eac-1bb2-4767-b418-56f355a9c808 is in state STARTED 2025-09-16 00:47:22.154620 | orchestrator | 2025-09-16 00:47:22 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:47:22.156835 | orchestrator | 2025-09-16 00:47:22 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:47:22.158828 | orchestrator | 2025-09-16 00:47:22 | INFO  | Task ad1b5405-5d3a-4605-8724-0b1539b5e8d3 is in state STARTED 2025-09-16 00:47:22.164895 | orchestrator | 2025-09-16 00:47:22 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:47:22.166074 | orchestrator | 2025-09-16 00:47:22 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:47:22.166096 | orchestrator | 2025-09-16 00:47:22 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:47:25.204655 | orchestrator | 2025-09-16 00:47:25 | INFO  | Task fb426eac-1bb2-4767-b418-56f355a9c808 is in state STARTED 2025-09-16 00:47:25.204830 | orchestrator | 2025-09-16 00:47:25 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:47:25.204848 | orchestrator | 2025-09-16 00:47:25 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:47:25.204860 | orchestrator | 2025-09-16 00:47:25 | INFO  | Task ad1b5405-5d3a-4605-8724-0b1539b5e8d3 is in state STARTED 2025-09-16 00:47:25.204871 | orchestrator | 2025-09-16 00:47:25 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:47:25.204881 | orchestrator | 2025-09-16 00:47:25 | INFO  | Task 
848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:47:25.204892 | orchestrator | 2025-09-16 00:47:25 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:47:28.254249 | orchestrator | 2025-09-16 00:47:28 | INFO  | Task fb426eac-1bb2-4767-b418-56f355a9c808 is in state STARTED 2025-09-16 00:47:28.254362 | orchestrator | 2025-09-16 00:47:28 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:47:28.254375 | orchestrator | 2025-09-16 00:47:28 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:47:28.254386 | orchestrator | 2025-09-16 00:47:28 | INFO  | Task ad1b5405-5d3a-4605-8724-0b1539b5e8d3 is in state STARTED 2025-09-16 00:47:28.254396 | orchestrator | 2025-09-16 00:47:28 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:47:28.254406 | orchestrator | 2025-09-16 00:47:28 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:47:28.254416 | orchestrator | 2025-09-16 00:47:28 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:47:31.282849 | orchestrator | 2025-09-16 00:47:31 | INFO  | Task fb426eac-1bb2-4767-b418-56f355a9c808 is in state STARTED 2025-09-16 00:47:31.283262 | orchestrator | 2025-09-16 00:47:31 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:47:31.283644 | orchestrator | 2025-09-16 00:47:31 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:47:31.284362 | orchestrator | 2025-09-16 00:47:31 | INFO  | Task ad1b5405-5d3a-4605-8724-0b1539b5e8d3 is in state SUCCESS 2025-09-16 00:47:31.286211 | orchestrator | 2025-09-16 00:47:31.286268 | orchestrator | 2025-09-16 00:47:31.286286 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-16 00:47:31.286305 | orchestrator | 2025-09-16 00:47:31.286323 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-16 00:47:31.286341 | orchestrator | Tuesday 16 September 2025 00:47:06 +0000 (0:00:00.299) 0:00:00.299 ***** 2025-09-16 00:47:31.286358 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:47:31.286376 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:47:31.286393 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:47:31.286409 | orchestrator | 2025-09-16 00:47:31.286426 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-16 00:47:31.286442 | orchestrator | Tuesday 16 September 2025 00:47:06 +0000 (0:00:00.314) 0:00:00.613 ***** 2025-09-16 00:47:31.286460 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-09-16 00:47:31.286476 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-09-16 00:47:31.286493 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-09-16 00:47:31.286509 | orchestrator | 2025-09-16 00:47:31.286525 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-09-16 00:47:31.286541 | orchestrator | 2025-09-16 00:47:31.286558 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-09-16 00:47:31.286584 | orchestrator | Tuesday 16 September 2025 00:47:07 +0000 (0:00:00.583) 0:00:01.197 ***** 2025-09-16 00:47:31.286654 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:47:31.286680 | 
orchestrator | 2025-09-16 00:47:31.286704 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-09-16 00:47:31.286779 | orchestrator | Tuesday 16 September 2025 00:47:08 +0000 (0:00:00.805) 0:00:02.003 ***** 2025-09-16 00:47:31.286806 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-16 00:47:31.286831 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-16 00:47:31.286856 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-16 00:47:31.286878 | orchestrator | 2025-09-16 00:47:31.286894 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-09-16 00:47:31.286911 | orchestrator | Tuesday 16 September 2025 00:47:08 +0000 (0:00:00.607) 0:00:02.611 ***** 2025-09-16 00:47:31.286928 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-16 00:47:31.286946 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-16 00:47:31.286963 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-16 00:47:31.286981 | orchestrator | 2025-09-16 00:47:31.286999 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-09-16 00:47:31.287018 | orchestrator | Tuesday 16 September 2025 00:47:10 +0000 (0:00:02.241) 0:00:04.852 ***** 2025-09-16 00:47:31.287033 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:47:31.287045 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:47:31.287055 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:47:31.287066 | orchestrator | 2025-09-16 00:47:31.287077 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-09-16 00:47:31.287088 | orchestrator | Tuesday 16 September 2025 00:47:12 +0000 (0:00:01.953) 0:00:06.806 ***** 2025-09-16 00:47:31.287099 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:47:31.287109 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:47:31.287120 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:47:31.287131 | orchestrator | 2025-09-16 00:47:31.287141 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:47:31.287153 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:47:31.287165 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:47:31.287176 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:47:31.287187 | orchestrator | 2025-09-16 00:47:31.287198 | orchestrator | 2025-09-16 00:47:31.287208 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:47:31.287219 | orchestrator | Tuesday 16 September 2025 00:47:15 +0000 (0:00:02.188) 0:00:08.994 ***** 2025-09-16 00:47:31.287230 | orchestrator | =============================================================================== 2025-09-16 00:47:31.287241 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.24s 2025-09-16 00:47:31.287252 | orchestrator | memcached : Restart memcached container --------------------------------- 2.19s 2025-09-16 00:47:31.287262 | orchestrator | memcached : Check memcached container ----------------------------------- 1.95s 2025-09-16 00:47:31.287273 | orchestrator | memcached : include_tasks 
----------------------------------------------- 0.81s 2025-09-16 00:47:31.287283 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.61s 2025-09-16 00:47:31.287294 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s 2025-09-16 00:47:31.287305 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-09-16 00:47:31.287315 | orchestrator | 2025-09-16 00:47:31.287326 | orchestrator | 2025-09-16 00:47:31.287337 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-16 00:47:31.287347 | orchestrator | 2025-09-16 00:47:31.287370 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-16 00:47:31.287381 | orchestrator | Tuesday 16 September 2025 00:47:05 +0000 (0:00:00.250) 0:00:00.250 ***** 2025-09-16 00:47:31.287392 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:47:31.287403 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:47:31.287414 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:47:31.287425 | orchestrator | 2025-09-16 00:47:31.287436 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-16 00:47:31.287465 | orchestrator | Tuesday 16 September 2025 00:47:06 +0000 (0:00:00.288) 0:00:00.539 ***** 2025-09-16 00:47:31.287477 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-09-16 00:47:31.287488 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-09-16 00:47:31.287499 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-09-16 00:47:31.287510 | orchestrator | 2025-09-16 00:47:31.287521 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-09-16 00:47:31.287531 | orchestrator | 2025-09-16 00:47:31.287542 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-09-16 00:47:31.287553 | orchestrator | Tuesday 16 September 2025 00:47:06 +0000 (0:00:00.439) 0:00:00.979 ***** 2025-09-16 00:47:31.287564 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:47:31.287575 | orchestrator | 2025-09-16 00:47:31.287586 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-09-16 00:47:31.287597 | orchestrator | Tuesday 16 September 2025 00:47:07 +0000 (0:00:00.686) 0:00:01.665 ***** 2025-09-16 00:47:31.287612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-16 00:47:31.287629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-16 00:47:31.287642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-16 00:47:31.287662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-16 00:47:31.287682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-16 00:47:31.287702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-16 00:47:31.287714 | orchestrator | 2025-09-16 00:47:31.287759 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-09-16 00:47:31.287780 | orchestrator | Tuesday 16 September 2025 00:47:08 +0000 (0:00:01.462) 0:00:03.128 ***** 2025-09-16 00:47:31.287800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-16 00:47:31.287822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-16 00:47:31.287834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-16 00:47:31.287846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-16 00:47:31.287865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-16 00:47:31.287884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-16 00:47:31.287896 | orchestrator | 2025-09-16 00:47:31.287907 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-09-16 00:47:31.287918 | orchestrator | Tuesday 16 September 2025 00:47:11 +0000 (0:00:02.961) 0:00:06.090 ***** 2025-09-16 00:47:31.287930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-16 00:47:31.287946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-16 00:47:31.287958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-16 00:47:31.287970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-16 00:47:31.287990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 
'timeout': '30'}}}) 2025-09-16 00:47:31.288002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-16 00:47:31.288014 | orchestrator | 2025-09-16 00:47:31.288031 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-09-16 00:47:31.288042 | orchestrator | Tuesday 16 September 2025 00:47:14 +0000 (0:00:02.791) 0:00:08.881 ***** 2025-09-16 00:47:31.288053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-16 00:47:31.288065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-16 00:47:31.288081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-16 00:47:31.288093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-16 00:47:31.288111 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-16 00:47:31.288123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-16 00:47:31.288134 | orchestrator | 2025-09-16 00:47:31.288145 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-16 00:47:31.288157 | orchestrator | Tuesday 16 September 2025 00:47:16 +0000 (0:00:01.945) 0:00:10.827 ***** 2025-09-16 00:47:31.288167 | orchestrator | 2025-09-16 00:47:31.288179 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-16 00:47:31.288195 | orchestrator | Tuesday 16 September 2025 00:47:16 +0000 (0:00:00.092) 0:00:10.920 ***** 2025-09-16 00:47:31.288206 | orchestrator | 2025-09-16 00:47:31.288217 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-16 00:47:31.288228 | orchestrator | Tuesday 16 September 2025 00:47:16 +0000 (0:00:00.061) 0:00:10.981 ***** 2025-09-16 00:47:31.288239 | orchestrator | 2025-09-16 00:47:31.288249 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-09-16 00:47:31.288260 | orchestrator | Tuesday 16 September 2025 00:47:16 +0000 (0:00:00.070) 0:00:11.051 ***** 2025-09-16 00:47:31.288271 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:47:31.288282 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:47:31.288293 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:47:31.288303 | orchestrator | 2025-09-16 00:47:31.288314 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-09-16 00:47:31.288325 | orchestrator | Tuesday 16 September 2025 00:47:20 +0000 (0:00:03.578) 0:00:14.630 ***** 2025-09-16 00:47:31.288336 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:47:31.288346 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:47:31.288357 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:47:31.288368 | orchestrator | 2025-09-16 00:47:31.288378 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:47:31.288389 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 
2025-09-16 00:47:31.288401 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:47:31.288417 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:47:31.288428 | orchestrator | 2025-09-16 00:47:31.288439 | orchestrator | 2025-09-16 00:47:31.288450 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:47:31.288475 | orchestrator | Tuesday 16 September 2025 00:47:29 +0000 (0:00:08.894) 0:00:23.524 ***** 2025-09-16 00:47:31.288485 | orchestrator | =============================================================================== 2025-09-16 00:47:31.288496 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.89s 2025-09-16 00:47:31.288507 | orchestrator | redis : Restart redis container ----------------------------------------- 3.58s 2025-09-16 00:47:31.288518 | orchestrator | redis : Copying over default config.json files -------------------------- 2.96s 2025-09-16 00:47:31.288528 | orchestrator | redis : Copying over redis config files --------------------------------- 2.79s 2025-09-16 00:47:31.288539 | orchestrator | redis : Check redis containers ------------------------------------------ 1.95s 2025-09-16 00:47:31.288550 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.46s 2025-09-16 00:47:31.288560 | orchestrator | redis : include_tasks --------------------------------------------------- 0.69s 2025-09-16 00:47:31.288571 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s 2025-09-16 00:47:31.288581 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2025-09-16 00:47:31.288592 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.22s 2025-09-16 00:47:31.288603 | orchestrator | 2025-09-16 00:47:31 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:47:31.288614 | orchestrator | 2025-09-16 00:47:31 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:47:31.288625 | orchestrator | 2025-09-16 00:47:31 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:47:34.321489 | orchestrator | 2025-09-16 00:47:34 | INFO  | Task fb426eac-1bb2-4767-b418-56f355a9c808 is in state STARTED 2025-09-16 00:47:34.321595 | orchestrator | 2025-09-16 00:47:34 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:47:34.323622 | orchestrator | 2025-09-16 00:47:34 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:47:34.326096 | orchestrator | 2025-09-16 00:47:34 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:47:34.327692 | orchestrator | 2025-09-16 00:47:34 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:47:34.328005 | orchestrator | 2025-09-16 00:47:34 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:47:37.360953 | orchestrator | 2025-09-16 00:47:37 | INFO  | Task fb426eac-1bb2-4767-b418-56f355a9c808 is in state STARTED 2025-09-16 00:47:37.361303 | orchestrator | 2025-09-16 00:47:37 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:47:37.361901 | orchestrator | 2025-09-16 00:47:37 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 
00:47:37.362384 | orchestrator | 2025-09-16 00:47:37 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:47:37.363924 | orchestrator | 2025-09-16 00:47:37 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:47:37.363957 | orchestrator | 2025-09-16 00:47:37 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:47:40.408868 | orchestrator | 2025-09-16 00:47:40 | INFO  | Task fb426eac-1bb2-4767-b418-56f355a9c808 is in state STARTED 2025-09-16 00:47:40.408975 | orchestrator | 2025-09-16 00:47:40 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:47:40.408990 | orchestrator | 2025-09-16 00:47:40 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:47:40.409002 | orchestrator | 2025-09-16 00:47:40 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:47:40.409039 | orchestrator | 2025-09-16 00:47:40 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:47:40.409051 | orchestrator | 2025-09-16 00:47:40 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:47:43.435809 | orchestrator | 2025-09-16 00:47:43 | INFO  | Task fb426eac-1bb2-4767-b418-56f355a9c808 is in state STARTED 2025-09-16 00:47:43.436708 | orchestrator | 2025-09-16 00:47:43 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:47:43.438571 | orchestrator | 2025-09-16 00:47:43 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:47:43.440259 | orchestrator | 2025-09-16 00:47:43 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:47:43.441529 | orchestrator | 2025-09-16 00:47:43 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:47:43.442094 | orchestrator | 2025-09-16 00:47:43 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:47:46.477980 | orchestrator | 2025-09-16 00:47:46 | INFO  | Task fb426eac-1bb2-4767-b418-56f355a9c808 is in state STARTED 2025-09-16 00:47:46.478121 | orchestrator | 2025-09-16 00:47:46 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:47:46.478857 | orchestrator | 2025-09-16 00:47:46 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:47:46.480996 | orchestrator | 2025-09-16 00:47:46 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:47:46.481535 | orchestrator | 2025-09-16 00:47:46 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:47:46.481556 | orchestrator | 2025-09-16 00:47:46 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:47:49.572173 | orchestrator | 2025-09-16 00:47:49 | INFO  | Task fb426eac-1bb2-4767-b418-56f355a9c808 is in state STARTED 2025-09-16 00:47:49.572271 | orchestrator | 2025-09-16 00:47:49 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:47:49.572285 | orchestrator | 2025-09-16 00:47:49 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:47:49.572297 | orchestrator | 2025-09-16 00:47:49 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:47:49.572308 | orchestrator | 2025-09-16 00:47:49 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:47:49.572319 | orchestrator | 2025-09-16 00:47:49 | INFO  | Wait 1 second(s) until the next check 2025-09-16 
00:47:52.595205 | orchestrator | 2025-09-16 00:47:52 | INFO  | Task fb426eac-1bb2-4767-b418-56f355a9c808 is in state STARTED 2025-09-16 00:47:52.598833 | orchestrator | 2025-09-16 00:47:52 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:47:52.601267 | orchestrator | 2025-09-16 00:47:52 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:47:52.603623 | orchestrator | 2025-09-16 00:47:52 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:47:52.605520 | orchestrator | 2025-09-16 00:47:52 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:47:52.606058 | orchestrator | 2025-09-16 00:47:52 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:47:55.678622 | orchestrator | 2025-09-16 00:47:55 | INFO  | Task fb426eac-1bb2-4767-b418-56f355a9c808 is in state STARTED 2025-09-16 00:47:55.679388 | orchestrator | 2025-09-16 00:47:55 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:47:55.680915 | orchestrator | 2025-09-16 00:47:55 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:47:55.681952 | orchestrator | 2025-09-16 00:47:55 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:47:55.683190 | orchestrator | 2025-09-16 00:47:55 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:47:55.683672 | orchestrator | 2025-09-16 00:47:55 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:47:58.717932 | orchestrator | 2025-09-16 00:47:58 | INFO  | Task fb426eac-1bb2-4767-b418-56f355a9c808 is in state STARTED 2025-09-16 00:47:58.718379 | orchestrator | 2025-09-16 00:47:58 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:47:58.719292 | orchestrator | 2025-09-16 00:47:58 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:47:58.721190 | orchestrator | 2025-09-16 00:47:58 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:47:58.722130 | orchestrator | 2025-09-16 00:47:58 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:47:58.722761 | orchestrator | 2025-09-16 00:47:58 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:48:01.780225 | orchestrator | 2025-09-16 00:48:01 | INFO  | Task fb426eac-1bb2-4767-b418-56f355a9c808 is in state STARTED 2025-09-16 00:48:01.780634 | orchestrator | 2025-09-16 00:48:01 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:48:01.784576 | orchestrator | 2025-09-16 00:48:01 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:48:01.787969 | orchestrator | 2025-09-16 00:48:01 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:48:01.789155 | orchestrator | 2025-09-16 00:48:01 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:48:01.789702 | orchestrator | 2025-09-16 00:48:01 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:48:04.923342 | orchestrator | 2025-09-16 00:48:04 | INFO  | Task fb426eac-1bb2-4767-b418-56f355a9c808 is in state STARTED 2025-09-16 00:48:04.923449 | orchestrator | 2025-09-16 00:48:04 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:48:04.923464 | orchestrator | 2025-09-16 00:48:04 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in 
state STARTED 2025-09-16 00:48:04.923476 | orchestrator | 2025-09-16 00:48:04 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:48:04.923488 | orchestrator | 2025-09-16 00:48:04 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:48:04.923499 | orchestrator | 2025-09-16 00:48:04 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:48:07.999432 | orchestrator | 2025-09-16 00:48:07 | INFO  | Task fb426eac-1bb2-4767-b418-56f355a9c808 is in state SUCCESS 2025-09-16 00:48:08.000606 | orchestrator | 2025-09-16 00:48:08.000646 | orchestrator | 2025-09-16 00:48:08.000659 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-16 00:48:08.000671 | orchestrator | 2025-09-16 00:48:08.000684 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-16 00:48:08.000695 | orchestrator | Tuesday 16 September 2025 00:47:05 +0000 (0:00:00.264) 0:00:00.264 ***** 2025-09-16 00:48:08.000707 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:48:08.000768 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:48:08.000780 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:48:08.000791 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:48:08.000802 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:48:08.000813 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:48:08.000852 | orchestrator | 2025-09-16 00:48:08.000864 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-16 00:48:08.000875 | orchestrator | Tuesday 16 September 2025 00:47:06 +0000 (0:00:00.627) 0:00:00.892 ***** 2025-09-16 00:48:08.000886 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-16 00:48:08.000898 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-16 00:48:08.000909 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-16 00:48:08.000920 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-16 00:48:08.000931 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-16 00:48:08.000942 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-16 00:48:08.000953 | orchestrator | 2025-09-16 00:48:08.000964 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-09-16 00:48:08.000975 | orchestrator | 2025-09-16 00:48:08.000986 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-09-16 00:48:08.000997 | orchestrator | Tuesday 16 September 2025 00:47:07 +0000 (0:00:00.840) 0:00:01.733 ***** 2025-09-16 00:48:08.001011 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:48:08.001024 | orchestrator | 2025-09-16 00:48:08.001036 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-16 00:48:08.001047 | orchestrator | Tuesday 16 September 2025 00:47:08 +0000 (0:00:01.405) 0:00:03.138 ***** 2025-09-16 00:48:08.001058 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-16 00:48:08.001069 | orchestrator | changed: [testbed-node-1] => 
(item=openvswitch) 2025-09-16 00:48:08.001080 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-16 00:48:08.001091 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-16 00:48:08.001102 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-16 00:48:08.001113 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-16 00:48:08.001124 | orchestrator | 2025-09-16 00:48:08.001225 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-16 00:48:08.001241 | orchestrator | Tuesday 16 September 2025 00:47:10 +0000 (0:00:01.564) 0:00:04.702 ***** 2025-09-16 00:48:08.001252 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-16 00:48:08.001263 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-16 00:48:08.001274 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-16 00:48:08.001285 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-16 00:48:08.001296 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-16 00:48:08.001306 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-16 00:48:08.001317 | orchestrator | 2025-09-16 00:48:08.001328 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-16 00:48:08.001339 | orchestrator | Tuesday 16 September 2025 00:47:12 +0000 (0:00:01.764) 0:00:06.467 ***** 2025-09-16 00:48:08.001350 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-09-16 00:48:08.001360 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:08.001372 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-09-16 00:48:08.001383 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:48:08.001394 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-09-16 00:48:08.001404 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:48:08.001428 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-09-16 00:48:08.001440 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:48:08.001451 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-09-16 00:48:08.001546 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:48:08.001573 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-09-16 00:48:08.001584 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:48:08.001595 | orchestrator | 2025-09-16 00:48:08.001671 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-09-16 00:48:08.001686 | orchestrator | Tuesday 16 September 2025 00:47:13 +0000 (0:00:01.429) 0:00:07.897 ***** 2025-09-16 00:48:08.001697 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:08.001708 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:48:08.001742 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:48:08.001753 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:48:08.001764 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:48:08.001775 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:48:08.001786 | orchestrator | 2025-09-16 00:48:08.001798 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-09-16 00:48:08.001809 | orchestrator | Tuesday 16 September 2025 00:47:14 +0000 (0:00:00.666) 0:00:08.564 ***** 2025-09-16 00:48:08.001838 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-16 00:48:08.001857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-16 00:48:08.001870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-16 00:48:08.001882 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-16 00:48:08.001900 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-16 00:48:08.001922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-16 00:48:08.001942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-16 00:48:08.001955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-16 00:48:08.001967 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-16 00:48:08.001979 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002002 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002072 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002086 | orchestrator | 2025-09-16 00:48:08.002098 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-09-16 00:48:08.002109 | orchestrator | Tuesday 16 September 2025 00:47:16 +0000 (0:00:01.994) 0:00:10.559 ***** 2025-09-16 00:48:08.002121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002144 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002163 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002175 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002224 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002270 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002282 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002301 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002315 | orchestrator | 2025-09-16 00:48:08.002328 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-09-16 00:48:08.002341 | orchestrator | Tuesday 16 September 2025 00:47:19 +0000 (0:00:03.323) 0:00:13.882 ***** 2025-09-16 00:48:08.002354 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:08.002367 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:48:08.002378 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:48:08.002389 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:48:08.002400 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:48:08.002411 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:48:08.002422 | orchestrator | 2025-09-16 00:48:08.002433 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-09-16 00:48:08.002444 | orchestrator | Tuesday 16 September 2025 00:47:21 +0000 (0:00:01.779) 0:00:15.661 ***** 2025-09-16 00:48:08.002456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002492 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002523 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002546 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002564 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002586 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002617 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002629 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-16 00:48:08.002641 | orchestrator | 2025-09-16 00:48:08.002652 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-16 00:48:08.002669 | orchestrator | Tuesday 16 September 2025 00:47:24 +0000 (0:00:03.164) 0:00:18.826 ***** 2025-09-16 00:48:08.002681 | orchestrator | 2025-09-16 00:48:08.002692 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-16 00:48:08.002703 | orchestrator | Tuesday 16 September 2025 00:47:24 +0000 (0:00:00.248) 0:00:19.074 ***** 2025-09-16 00:48:08.002733 | orchestrator | 2025-09-16 00:48:08.002745 | orchestrator | TASK [openvswitch : 
Flush Handlers] ******************************************** 2025-09-16 00:48:08.002756 | orchestrator | Tuesday 16 September 2025 00:47:24 +0000 (0:00:00.135) 0:00:19.210 ***** 2025-09-16 00:48:08.002766 | orchestrator | 2025-09-16 00:48:08.002777 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-16 00:48:08.002788 | orchestrator | Tuesday 16 September 2025 00:47:25 +0000 (0:00:00.159) 0:00:19.370 ***** 2025-09-16 00:48:08.002799 | orchestrator | 2025-09-16 00:48:08.002809 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-16 00:48:08.002820 | orchestrator | Tuesday 16 September 2025 00:47:25 +0000 (0:00:00.276) 0:00:19.646 ***** 2025-09-16 00:48:08.002831 | orchestrator | 2025-09-16 00:48:08.002842 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-16 00:48:08.002853 | orchestrator | Tuesday 16 September 2025 00:47:25 +0000 (0:00:00.325) 0:00:19.971 ***** 2025-09-16 00:48:08.002863 | orchestrator | 2025-09-16 00:48:08.002874 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-09-16 00:48:08.002885 | orchestrator | Tuesday 16 September 2025 00:47:25 +0000 (0:00:00.353) 0:00:20.325 ***** 2025-09-16 00:48:08.002896 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:48:08.002907 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:48:08.002918 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:48:08.002928 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:48:08.002939 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:48:08.002950 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:48:08.002960 | orchestrator | 2025-09-16 00:48:08.002971 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-09-16 00:48:08.002982 | orchestrator | Tuesday 16 September 2025 00:47:37 +0000 (0:00:11.073) 0:00:31.398 ***** 2025-09-16 00:48:08.002993 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:48:08.003004 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:48:08.003015 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:48:08.003026 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:48:08.003037 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:48:08.003047 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:48:08.003058 | orchestrator | 2025-09-16 00:48:08.003069 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-16 00:48:08.003085 | orchestrator | Tuesday 16 September 2025 00:47:38 +0000 (0:00:01.205) 0:00:32.604 ***** 2025-09-16 00:48:08.003096 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:48:08.003107 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:48:08.003118 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:48:08.003129 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:48:08.003140 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:48:08.003151 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:48:08.003161 | orchestrator | 2025-09-16 00:48:08.003173 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-09-16 00:48:08.003183 | orchestrator | Tuesday 16 September 2025 00:47:43 +0000 (0:00:05.216) 0:00:37.820 ***** 2025-09-16 00:48:08.003194 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 
'testbed-node-1'}) 2025-09-16 00:48:08.003206 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-09-16 00:48:08.003217 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-09-16 00:48:08.003228 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-09-16 00:48:08.003245 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-09-16 00:48:08.003262 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-09-16 00:48:08.003273 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-09-16 00:48:08.003284 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-09-16 00:48:08.003295 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-09-16 00:48:08.003305 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-09-16 00:48:08.003316 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-09-16 00:48:08.003327 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-09-16 00:48:08.003338 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-16 00:48:08.003348 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-16 00:48:08.003359 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-16 00:48:08.003370 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-16 00:48:08.003381 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-16 00:48:08.003392 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-16 00:48:08.003402 | orchestrator | 2025-09-16 00:48:08.003413 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-09-16 00:48:08.003424 | orchestrator | Tuesday 16 September 2025 00:47:51 +0000 (0:00:07.683) 0:00:45.504 ***** 2025-09-16 00:48:08.003435 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-09-16 00:48:08.003446 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:48:08.003458 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-09-16 00:48:08.003469 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:48:08.003479 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-09-16 00:48:08.003491 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:48:08.003502 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-09-16 00:48:08.003513 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 
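The openvswitch tasks above reduce to a handful of ovs-vsctl/ovsdb calls inside the kolla containers. A minimal manual sketch, assuming the container names (openvswitch_db, openvswitch_vswitchd), the bridge br-ex, the port vxlan0 and the external_ids values shown in the log; the role drives these through Ansible modules, so treat this only as an illustration:

    #!/usr/bin/env bash
    set -euo pipefail
    NODE="testbed-node-0"   # per-host value, as reported in the log

    # Container healthchecks from the task output (CMD-SHELL tests)
    docker exec openvswitch_db ovsdb-client list-dbs
    docker exec openvswitch_vswitchd ovs-appctl version

    # "Set system-id, hostname and hw-offload": keys in the Open_vSwitch table
    docker exec openvswitch_vswitchd ovs-vsctl set Open_vSwitch . \
        external_ids:system-id="${NODE}" external_ids:hostname="${NODE}"
    # hw-offload is applied with state=absent, i.e. the key is dropped if present
    docker exec openvswitch_vswitchd ovs-vsctl remove Open_vSwitch . other_config hw-offload

    # "Ensuring OVS bridge/ports are properly setup" (nodes 0-2 in this run)
    docker exec openvswitch_vswitchd ovs-vsctl --may-exist add-br br-ex
    docker exec openvswitch_vswitchd ovs-vsctl --may-exist add-port br-ex vxlan0
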
2025-09-16 00:48:08.003524 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-09-16 00:48:08.003535 | orchestrator | 2025-09-16 00:48:08.003546 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-09-16 00:48:08.003557 | orchestrator | Tuesday 16 September 2025 00:47:54 +0000 (0:00:02.856) 0:00:48.360 ***** 2025-09-16 00:48:08.003567 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-09-16 00:48:08.003578 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:48:08.003589 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-09-16 00:48:08.003600 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:48:08.003611 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-09-16 00:48:08.003622 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:48:08.003633 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-09-16 00:48:08.003643 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-09-16 00:48:08.003654 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-09-16 00:48:08.003673 | orchestrator | 2025-09-16 00:48:08.003684 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-16 00:48:08.003695 | orchestrator | Tuesday 16 September 2025 00:47:58 +0000 (0:00:04.009) 0:00:52.369 ***** 2025-09-16 00:48:08.003705 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:48:08.003746 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:48:08.003763 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:48:08.003774 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:48:08.003785 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:48:08.003796 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:48:08.003807 | orchestrator | 2025-09-16 00:48:08.003818 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:48:08.003829 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-16 00:48:08.003841 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-16 00:48:08.003852 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-16 00:48:08.003863 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-16 00:48:08.003874 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-16 00:48:08.003891 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-16 00:48:08.003902 | orchestrator | 2025-09-16 00:48:08.003913 | orchestrator | 2025-09-16 00:48:08.003924 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:48:08.003935 | orchestrator | Tuesday 16 September 2025 00:48:06 +0000 (0:00:08.945) 0:01:01.315 ***** 2025-09-16 00:48:08.003946 | orchestrator | =============================================================================== 2025-09-16 00:48:08.003957 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 14.16s 2025-09-16 00:48:08.003968 | orchestrator | openvswitch : Restart openvswitch-db-server container 
------------------ 11.07s 2025-09-16 00:48:08.003978 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.68s 2025-09-16 00:48:08.003989 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.01s 2025-09-16 00:48:08.004000 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.32s 2025-09-16 00:48:08.004011 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.16s 2025-09-16 00:48:08.004021 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.86s 2025-09-16 00:48:08.004032 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.99s 2025-09-16 00:48:08.004042 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.78s 2025-09-16 00:48:08.004053 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.76s 2025-09-16 00:48:08.004064 | orchestrator | module-load : Load modules ---------------------------------------------- 1.56s 2025-09-16 00:48:08.004075 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.50s 2025-09-16 00:48:08.004085 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.43s 2025-09-16 00:48:08.004096 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.41s 2025-09-16 00:48:08.004107 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.21s 2025-09-16 00:48:08.004118 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s 2025-09-16 00:48:08.004135 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.67s 2025-09-16 00:48:08.004146 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.63s 2025-09-16 00:48:08.004156 | orchestrator | 2025-09-16 00:48:07 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:48:08.004168 | orchestrator | 2025-09-16 00:48:07 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:48:08.004179 | orchestrator | 2025-09-16 00:48:07 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:48:08.004190 | orchestrator | 2025-09-16 00:48:08 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:48:08.004201 | orchestrator | 2025-09-16 00:48:08 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:48:11.044197 | orchestrator | 2025-09-16 00:48:11 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:48:11.044302 | orchestrator | 2025-09-16 00:48:11 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:48:11.044316 | orchestrator | 2025-09-16 00:48:11 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:48:11.044328 | orchestrator | 2025-09-16 00:48:11 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:48:11.044340 | orchestrator | 2025-09-16 00:48:11 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:48:11.044367 | orchestrator | 2025-09-16 00:48:11 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:48:14.124588 | orchestrator | 2025-09-16 00:48:14 | INFO  | Task 
e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:48:14.124689 | orchestrator | 2025-09-16 00:48:14 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:48:14.125013 | orchestrator | 2025-09-16 00:48:14 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:48:14.125453 | orchestrator | 2025-09-16 00:48:14 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:48:14.126338 | orchestrator | 2025-09-16 00:48:14 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:48:14.126364 | orchestrator | 2025-09-16 00:48:14 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:48:17.269364 | orchestrator | 2025-09-16 00:48:17 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:48:17.269767 | orchestrator | 2025-09-16 00:48:17 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:48:17.270812 | orchestrator | 2025-09-16 00:48:17 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:48:17.271617 | orchestrator | 2025-09-16 00:48:17 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state STARTED 2025-09-16 00:48:17.279028 | orchestrator | 2025-09-16 00:48:17 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:48:17.279052 | orchestrator | 2025-09-16 00:48:17 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:48:20.388888 | orchestrator | 2025-09-16 00:48:20.388977 | orchestrator | 2025-09-16 00:48:20.388991 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-09-16 00:48:20.389001 | orchestrator | 2025-09-16 00:48:20.389011 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-09-16 00:48:20.389021 | orchestrator | Tuesday 16 September 2025 00:44:30 +0000 (0:00:00.238) 0:00:00.238 ***** 2025-09-16 00:48:20.389030 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:48:20.389040 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:48:20.389067 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:48:20.389076 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:48:20.389085 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:48:20.389116 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:48:20.389126 | orchestrator | 2025-09-16 00:48:20.389134 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-09-16 00:48:20.389143 | orchestrator | Tuesday 16 September 2025 00:44:31 +0000 (0:00:00.918) 0:00:01.157 ***** 2025-09-16 00:48:20.389152 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:48:20.389161 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:48:20.389170 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:48:20.389178 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.389187 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:48:20.389196 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:48:20.389204 | orchestrator | 2025-09-16 00:48:20.389213 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-09-16 00:48:20.389221 | orchestrator | Tuesday 16 September 2025 00:44:32 +0000 (0:00:00.778) 0:00:01.936 ***** 2025-09-16 00:48:20.389230 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:48:20.389239 | orchestrator | skipping: [testbed-node-4] 2025-09-16 
00:48:20.389247 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:48:20.389256 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.389264 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:48:20.389273 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:48:20.389281 | orchestrator | 2025-09-16 00:48:20.389290 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-09-16 00:48:20.389299 | orchestrator | Tuesday 16 September 2025 00:44:33 +0000 (0:00:00.738) 0:00:02.674 ***** 2025-09-16 00:48:20.389307 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:48:20.389316 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:48:20.389324 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:48:20.389333 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:48:20.389341 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:48:20.389350 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:48:20.389358 | orchestrator | 2025-09-16 00:48:20.389367 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-09-16 00:48:20.389376 | orchestrator | Tuesday 16 September 2025 00:44:35 +0000 (0:00:02.571) 0:00:05.246 ***** 2025-09-16 00:48:20.389384 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:48:20.389393 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:48:20.389401 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:48:20.389410 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:48:20.389418 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:48:20.389426 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:48:20.389435 | orchestrator | 2025-09-16 00:48:20.389445 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-09-16 00:48:20.389456 | orchestrator | Tuesday 16 September 2025 00:44:36 +0000 (0:00:00.914) 0:00:06.161 ***** 2025-09-16 00:48:20.389465 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:48:20.389475 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:48:20.389485 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:48:20.389495 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:48:20.389505 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:48:20.389514 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:48:20.389524 | orchestrator | 2025-09-16 00:48:20.389534 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-09-16 00:48:20.389545 | orchestrator | Tuesday 16 September 2025 00:44:37 +0000 (0:00:01.119) 0:00:07.280 ***** 2025-09-16 00:48:20.389555 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:48:20.389564 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:48:20.389583 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:48:20.389592 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.389602 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:48:20.389612 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:48:20.389635 | orchestrator | 2025-09-16 00:48:20.389645 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-09-16 00:48:20.389656 | orchestrator | Tuesday 16 September 2025 00:44:38 +0000 (0:00:00.516) 0:00:07.797 ***** 2025-09-16 00:48:20.389665 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:48:20.389675 | orchestrator | skipping: [testbed-node-4] 2025-09-16 
00:48:20.389685 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:48:20.389695 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.389704 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:48:20.389736 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:48:20.389746 | orchestrator | 2025-09-16 00:48:20.389756 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-09-16 00:48:20.389766 | orchestrator | Tuesday 16 September 2025 00:44:39 +0000 (0:00:00.867) 0:00:08.665 ***** 2025-09-16 00:48:20.389776 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-16 00:48:20.389785 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-16 00:48:20.389794 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:48:20.389803 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-16 00:48:20.389811 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-16 00:48:20.389820 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:48:20.389828 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-16 00:48:20.389837 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-16 00:48:20.389846 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:48:20.389855 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-16 00:48:20.389878 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-16 00:48:20.389887 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.389896 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-16 00:48:20.389905 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-16 00:48:20.389913 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:48:20.389921 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-16 00:48:20.389930 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-16 00:48:20.389938 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:48:20.389947 | orchestrator | 2025-09-16 00:48:20.389956 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-09-16 00:48:20.389964 | orchestrator | Tuesday 16 September 2025 00:44:40 +0000 (0:00:00.945) 0:00:09.611 ***** 2025-09-16 00:48:20.389973 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:48:20.389981 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:48:20.389990 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:48:20.389998 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.390007 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:48:20.390015 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:48:20.390086 | orchestrator | 2025-09-16 00:48:20.390095 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-09-16 00:48:20.390105 | orchestrator | Tuesday 16 September 2025 00:44:41 +0000 (0:00:01.145) 0:00:10.756 ***** 2025-09-16 00:48:20.390145 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:48:20.390154 | orchestrator | ok: [testbed-node-4] 
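The k3s_prereq steps reported as changed above are plain sysctl toggles. A hedged sketch of the equivalent manual configuration; the exact keys, values and persistence file name are assumptions derived from the task names, not read from the role:

    #!/usr/bin/env bash
    set -euo pipefail

    # Runtime toggles matching "Enable IPv4/IPv6 forwarding" and
    # "Enable IPv6 router advertisements"
    sysctl -w net.ipv4.ip_forward=1
    sysctl -w net.ipv6.conf.all.forwarding=1
    sysctl -w net.ipv6.conf.all.accept_ra=2

    # Persist across reboots (hypothetical drop-in name)
    {
        echo 'net.ipv4.ip_forward = 1'
        echo 'net.ipv6.conf.all.forwarding = 1'
        echo 'net.ipv6.conf.all.accept_ra = 2'
    } > /etc/sysctl.d/90-k3s-prereq.conf
    sysctl --system
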
2025-09-16 00:48:20.390162 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:48:20.390171 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:48:20.390179 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:48:20.390188 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:48:20.390196 | orchestrator | 2025-09-16 00:48:20.390205 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-09-16 00:48:20.390221 | orchestrator | Tuesday 16 September 2025 00:44:42 +0000 (0:00:00.983) 0:00:11.740 ***** 2025-09-16 00:48:20.390230 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:48:20.390238 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:48:20.390247 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:48:20.390255 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:48:20.390264 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:48:20.390272 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:48:20.390281 | orchestrator | 2025-09-16 00:48:20.390289 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-09-16 00:48:20.390298 | orchestrator | Tuesday 16 September 2025 00:44:48 +0000 (0:00:05.677) 0:00:17.418 ***** 2025-09-16 00:48:20.390306 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:48:20.390315 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:48:20.390323 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:48:20.390332 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.390340 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:48:20.390349 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:48:20.390357 | orchestrator | 2025-09-16 00:48:20.390366 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-09-16 00:48:20.390374 | orchestrator | Tuesday 16 September 2025 00:44:49 +0000 (0:00:01.549) 0:00:18.967 ***** 2025-09-16 00:48:20.390383 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:48:20.390391 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:48:20.390400 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:48:20.390408 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.390417 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:48:20.390425 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:48:20.390434 | orchestrator | 2025-09-16 00:48:20.390443 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-09-16 00:48:20.390453 | orchestrator | Tuesday 16 September 2025 00:44:52 +0000 (0:00:02.746) 0:00:21.713 ***** 2025-09-16 00:48:20.390466 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:48:20.390475 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:48:20.390483 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:48:20.390492 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:48:20.390500 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:48:20.390509 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:48:20.390517 | orchestrator | 2025-09-16 00:48:20.390526 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-09-16 00:48:20.390535 | orchestrator | Tuesday 16 September 2025 00:44:54 +0000 (0:00:01.798) 0:00:23.512 ***** 2025-09-16 00:48:20.390543 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-09-16 00:48:20.390552 | orchestrator | changed: 
[testbed-node-4] => (item=rancher) 2025-09-16 00:48:20.390561 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-09-16 00:48:20.390569 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-09-16 00:48:20.390578 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-09-16 00:48:20.390586 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-09-16 00:48:20.390595 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-09-16 00:48:20.390603 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-09-16 00:48:20.390611 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-09-16 00:48:20.390620 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-09-16 00:48:20.390628 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-09-16 00:48:20.390637 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-09-16 00:48:20.390645 | orchestrator | 2025-09-16 00:48:20.390654 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-09-16 00:48:20.390663 | orchestrator | Tuesday 16 September 2025 00:44:57 +0000 (0:00:03.289) 0:00:26.802 ***** 2025-09-16 00:48:20.390671 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:48:20.390684 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:48:20.390693 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:48:20.390702 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:48:20.390724 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:48:20.390734 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:48:20.390742 | orchestrator | 2025-09-16 00:48:20.390758 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-09-16 00:48:20.390767 | orchestrator | 2025-09-16 00:48:20.390775 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-09-16 00:48:20.390784 | orchestrator | Tuesday 16 September 2025 00:44:59 +0000 (0:00:02.284) 0:00:29.086 ***** 2025-09-16 00:48:20.390793 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:48:20.390801 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:48:20.390810 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:48:20.390818 | orchestrator | 2025-09-16 00:48:20.390827 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-09-16 00:48:20.390836 | orchestrator | Tuesday 16 September 2025 00:45:00 +0000 (0:00:01.092) 0:00:30.178 ***** 2025-09-16 00:48:20.390844 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:48:20.390853 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:48:20.390861 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:48:20.390870 | orchestrator | 2025-09-16 00:48:20.390878 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-09-16 00:48:20.390887 | orchestrator | Tuesday 16 September 2025 00:45:02 +0000 (0:00:01.390) 0:00:31.569 ***** 2025-09-16 00:48:20.390895 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:48:20.390904 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:48:20.390912 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:48:20.390921 | orchestrator | 2025-09-16 00:48:20.390929 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-09-16 00:48:20.390938 | orchestrator | Tuesday 16 September 2025 00:45:03 +0000 (0:00:01.418) 0:00:32.988 
***** 2025-09-16 00:48:20.390947 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:48:20.390955 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:48:20.390964 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:48:20.390972 | orchestrator | 2025-09-16 00:48:20.390981 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-09-16 00:48:20.390989 | orchestrator | Tuesday 16 September 2025 00:45:05 +0000 (0:00:01.582) 0:00:34.570 ***** 2025-09-16 00:48:20.390998 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.391006 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:48:20.391015 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:48:20.391024 | orchestrator | 2025-09-16 00:48:20.391032 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-09-16 00:48:20.391041 | orchestrator | Tuesday 16 September 2025 00:45:06 +0000 (0:00:00.908) 0:00:35.479 ***** 2025-09-16 00:48:20.391049 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:48:20.391058 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:48:20.391066 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:48:20.391075 | orchestrator | 2025-09-16 00:48:20.391083 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-09-16 00:48:20.391092 | orchestrator | Tuesday 16 September 2025 00:45:07 +0000 (0:00:01.024) 0:00:36.504 ***** 2025-09-16 00:48:20.391100 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:48:20.391109 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:48:20.391118 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:48:20.391126 | orchestrator | 2025-09-16 00:48:20.391135 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-09-16 00:48:20.391144 | orchestrator | Tuesday 16 September 2025 00:45:09 +0000 (0:00:01.916) 0:00:38.421 ***** 2025-09-16 00:48:20.391152 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:48:20.391161 | orchestrator | 2025-09-16 00:48:20.391169 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-09-16 00:48:20.391183 | orchestrator | Tuesday 16 September 2025 00:45:10 +0000 (0:00:01.527) 0:00:39.949 ***** 2025-09-16 00:48:20.391192 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:48:20.391200 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:48:20.391209 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:48:20.391217 | orchestrator | 2025-09-16 00:48:20.391226 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-09-16 00:48:20.391234 | orchestrator | Tuesday 16 September 2025 00:45:13 +0000 (0:00:03.048) 0:00:42.998 ***** 2025-09-16 00:48:20.391243 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:48:20.391255 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:48:20.391264 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:48:20.391273 | orchestrator | 2025-09-16 00:48:20.391281 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-09-16 00:48:20.391290 | orchestrator | Tuesday 16 September 2025 00:45:14 +0000 (0:00:00.627) 0:00:43.625 ***** 2025-09-16 00:48:20.391298 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:48:20.391307 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:48:20.391315 | 
orchestrator | skipping: [testbed-node-2] 2025-09-16 00:48:20.391324 | orchestrator | 2025-09-16 00:48:20.391332 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-09-16 00:48:20.391341 | orchestrator | Tuesday 16 September 2025 00:45:15 +0000 (0:00:01.372) 0:00:44.997 ***** 2025-09-16 00:48:20.391349 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:48:20.391358 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:48:20.391366 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:48:20.391375 | orchestrator | 2025-09-16 00:48:20.391384 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-09-16 00:48:20.391392 | orchestrator | Tuesday 16 September 2025 00:45:17 +0000 (0:00:01.710) 0:00:46.708 ***** 2025-09-16 00:48:20.391400 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.391409 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:48:20.391418 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:48:20.391426 | orchestrator | 2025-09-16 00:48:20.391434 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-09-16 00:48:20.391443 | orchestrator | Tuesday 16 September 2025 00:45:17 +0000 (0:00:00.594) 0:00:47.302 ***** 2025-09-16 00:48:20.391452 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.391460 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:48:20.391469 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:48:20.391477 | orchestrator | 2025-09-16 00:48:20.391486 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-09-16 00:48:20.391494 | orchestrator | Tuesday 16 September 2025 00:45:18 +0000 (0:00:00.408) 0:00:47.710 ***** 2025-09-16 00:48:20.391503 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:48:20.391511 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:48:20.391520 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:48:20.391529 | orchestrator | 2025-09-16 00:48:20.391542 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-09-16 00:48:20.391551 | orchestrator | Tuesday 16 September 2025 00:45:21 +0000 (0:00:03.001) 0:00:50.713 ***** 2025-09-16 00:48:20.391560 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-16 00:48:20.391569 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-16 00:48:20.391578 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-16 00:48:20.391587 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-16 00:48:20.391596 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-16 00:48:20.391610 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
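The retry loop above ("Verify that all nodes actually joined") simply polls the bootstrapping cluster until every master shows up. A rough by-hand equivalent, assuming the check runs on the first master; as the task name says, journalctl -u k3s-init is the place to look if it never converges:

    #!/usr/bin/env bash
    set -u
    EXPECTED=3   # three master nodes in this testbed run

    for attempt in $(seq 1 20); do
        # Count the nodes currently registered, via the embedded kubectl
        joined=$(k3s kubectl get nodes --no-headers 2>/dev/null | wc -l)
        if [ "${joined}" -ge "${EXPECTED}" ]; then
            echo "all ${EXPECTED} nodes joined after ${attempt} check(s)"
            exit 0
        fi
        echo "check ${attempt}: ${joined}/${EXPECTED} nodes joined, retrying"
        sleep 10
    done
    echo "nodes did not join; inspect: journalctl -u k3s-init --no-pager" >&2
    exit 1
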
2025-09-16 00:48:20.391618 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-16 00:48:20.391627 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-16 00:48:20.391636 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-16 00:48:20.391644 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-16 00:48:20.391653 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-16 00:48:20.391662 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-16 00:48:20.391671 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-09-16 00:48:20.391679 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-09-16 00:48:20.391688 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-09-16 00:48:20.391697 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:48:20.391706 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:48:20.391771 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:48:20.391782 | orchestrator | 2025-09-16 00:48:20.391790 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-09-16 00:48:20.391799 | orchestrator | Tuesday 16 September 2025 00:46:16 +0000 (0:00:54.869) 0:01:45.582 ***** 2025-09-16 00:48:20.391808 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.391821 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:48:20.391830 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:48:20.391838 | orchestrator | 2025-09-16 00:48:20.391847 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-09-16 00:48:20.391856 | orchestrator | Tuesday 16 September 2025 00:46:16 +0000 (0:00:00.399) 0:01:45.982 ***** 2025-09-16 00:48:20.391864 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:48:20.391873 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:48:20.391881 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:48:20.391890 | orchestrator | 2025-09-16 00:48:20.391899 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-09-16 00:48:20.391907 | orchestrator | Tuesday 16 September 2025 00:46:17 +0000 (0:00:00.970) 0:01:46.953 ***** 2025-09-16 00:48:20.391916 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:48:20.391925 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:48:20.391933 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:48:20.391942 | orchestrator | 2025-09-16 00:48:20.391950 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-09-16 00:48:20.391959 | orchestrator | Tuesday 16 September 2025 00:46:18 +0000 (0:00:01.269) 
0:01:48.222 ***** 2025-09-16 00:48:20.391968 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:48:20.391977 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:48:20.391985 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:48:20.391994 | orchestrator | 2025-09-16 00:48:20.392002 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-09-16 00:48:20.392011 | orchestrator | Tuesday 16 September 2025 00:46:44 +0000 (0:00:26.043) 0:02:14.266 ***** 2025-09-16 00:48:20.392020 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:48:20.392029 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:48:20.392043 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:48:20.392051 | orchestrator | 2025-09-16 00:48:20.392060 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-09-16 00:48:20.392069 | orchestrator | Tuesday 16 September 2025 00:46:45 +0000 (0:00:00.691) 0:02:14.958 ***** 2025-09-16 00:48:20.392078 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:48:20.392086 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:48:20.392095 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:48:20.392103 | orchestrator | 2025-09-16 00:48:20.392117 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-09-16 00:48:20.392126 | orchestrator | Tuesday 16 September 2025 00:46:46 +0000 (0:00:00.671) 0:02:15.629 ***** 2025-09-16 00:48:20.392134 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:48:20.392143 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:48:20.392151 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:48:20.392160 | orchestrator | 2025-09-16 00:48:20.392168 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-09-16 00:48:20.392177 | orchestrator | Tuesday 16 September 2025 00:46:46 +0000 (0:00:00.585) 0:02:16.214 ***** 2025-09-16 00:48:20.392186 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:48:20.392194 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:48:20.392203 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:48:20.392211 | orchestrator | 2025-09-16 00:48:20.392219 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-09-16 00:48:20.392228 | orchestrator | Tuesday 16 September 2025 00:46:47 +0000 (0:00:00.832) 0:02:17.046 ***** 2025-09-16 00:48:20.392237 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:48:20.392245 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:48:20.392253 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:48:20.392262 | orchestrator | 2025-09-16 00:48:20.392271 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-09-16 00:48:20.392279 | orchestrator | Tuesday 16 September 2025 00:46:47 +0000 (0:00:00.255) 0:02:17.302 ***** 2025-09-16 00:48:20.392288 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:48:20.392296 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:48:20.392305 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:48:20.392313 | orchestrator | 2025-09-16 00:48:20.392322 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-09-16 00:48:20.392330 | orchestrator | Tuesday 16 September 2025 00:46:48 +0000 (0:00:00.570) 0:02:17.872 ***** 2025-09-16 00:48:20.392339 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:48:20.392347 | orchestrator | changed: 
[testbed-node-1] 2025-09-16 00:48:20.392356 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:48:20.392364 | orchestrator | 2025-09-16 00:48:20.392373 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-09-16 00:48:20.392382 | orchestrator | Tuesday 16 September 2025 00:46:49 +0000 (0:00:00.574) 0:02:18.447 ***** 2025-09-16 00:48:20.392390 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:48:20.392399 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:48:20.392407 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:48:20.392416 | orchestrator | 2025-09-16 00:48:20.392424 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-09-16 00:48:20.392433 | orchestrator | Tuesday 16 September 2025 00:46:50 +0000 (0:00:00.999) 0:02:19.447 ***** 2025-09-16 00:48:20.392441 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:48:20.392450 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:48:20.392458 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:48:20.392467 | orchestrator | 2025-09-16 00:48:20.392475 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-09-16 00:48:20.392484 | orchestrator | Tuesday 16 September 2025 00:46:50 +0000 (0:00:00.787) 0:02:20.235 ***** 2025-09-16 00:48:20.392493 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.392501 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:48:20.392510 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:48:20.392518 | orchestrator | 2025-09-16 00:48:20.392531 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-09-16 00:48:20.392540 | orchestrator | Tuesday 16 September 2025 00:46:51 +0000 (0:00:00.246) 0:02:20.481 ***** 2025-09-16 00:48:20.392548 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.392557 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:48:20.392565 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:48:20.392574 | orchestrator | 2025-09-16 00:48:20.392582 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-09-16 00:48:20.392591 | orchestrator | Tuesday 16 September 2025 00:46:51 +0000 (0:00:00.239) 0:02:20.721 ***** 2025-09-16 00:48:20.392600 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:48:20.392608 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:48:20.392617 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:48:20.392625 | orchestrator | 2025-09-16 00:48:20.392634 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-09-16 00:48:20.392643 | orchestrator | Tuesday 16 September 2025 00:46:52 +0000 (0:00:00.876) 0:02:21.597 ***** 2025-09-16 00:48:20.392651 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:48:20.392660 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:48:20.392669 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:48:20.392677 | orchestrator | 2025-09-16 00:48:20.392686 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-09-16 00:48:20.392695 | orchestrator | Tuesday 16 September 2025 00:46:52 +0000 (0:00:00.685) 0:02:22.283 ***** 2025-09-16 00:48:20.392703 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-16 00:48:20.392731 | 
orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-16 00:48:20.392740 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-16 00:48:20.392749 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-16 00:48:20.392758 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-16 00:48:20.392766 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-16 00:48:20.392775 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-16 00:48:20.392784 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-16 00:48:20.392792 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-16 00:48:20.392805 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-09-16 00:48:20.392814 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-16 00:48:20.392823 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-16 00:48:20.392831 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-09-16 00:48:20.392840 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-16 00:48:20.392848 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-16 00:48:20.392857 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-16 00:48:20.392865 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-16 00:48:20.392874 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-16 00:48:20.392882 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-16 00:48:20.392891 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-16 00:48:20.392905 | orchestrator | 2025-09-16 00:48:20.392914 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-09-16 00:48:20.392922 | orchestrator | 2025-09-16 00:48:20.392930 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-09-16 00:48:20.392939 | orchestrator | Tuesday 16 September 2025 00:46:56 +0000 (0:00:03.295) 0:02:25.578 ***** 2025-09-16 00:48:20.392948 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:48:20.392956 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:48:20.392965 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:48:20.392973 | orchestrator | 2025-09-16 00:48:20.392982 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-09-16 00:48:20.392990 | orchestrator | Tuesday 16 September 2025 00:46:56 +0000 (0:00:00.396) 0:02:25.974 ***** 2025-09-16 00:48:20.392999 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:48:20.393007 | orchestrator | ok: [testbed-node-4] 
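The node-token and kubeconfig handling reported above can be reproduced by hand roughly as follows. The file locations are the standard k3s paths and the API address comes from the "Configure kubectl cluster to https://192.168.16.8:6443" task name, while the cluster entry name and the ownership handling are assumptions:

    #!/usr/bin/env bash
    set -euo pipefail

    # Join token on the first master (the role briefly relaxes the file mode,
    # reads the token, then restores the original permissions)
    TOKEN=$(sudo cat /var/lib/rancher/k3s/server/node-token)

    # Per-user copy of the admin kubeconfig, pointed at the VIP instead of localhost
    mkdir -p "${HOME}/.kube"
    sudo cp /etc/rancher/k3s/k3s.yaml "${HOME}/.kube/config"
    sudo chown "$(id -u):$(id -g)" "${HOME}/.kube/config"
    kubectl config set-cluster default \
        --server=https://192.168.16.8:6443 --kubeconfig "${HOME}/.kube/config"

    echo "node token: ${TOKEN}"
    kubectl --kubeconfig "${HOME}/.kube/config" get nodes
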
2025-09-16 00:48:20.393016 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:48:20.393024 | orchestrator | 2025-09-16 00:48:20.393536 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-09-16 00:48:20.393552 | orchestrator | Tuesday 16 September 2025 00:46:57 +0000 (0:00:00.587) 0:02:26.561 ***** 2025-09-16 00:48:20.393560 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:48:20.393569 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:48:20.393577 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:48:20.393586 | orchestrator | 2025-09-16 00:48:20.393595 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-09-16 00:48:20.393603 | orchestrator | Tuesday 16 September 2025 00:46:57 +0000 (0:00:00.297) 0:02:26.859 ***** 2025-09-16 00:48:20.393612 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:48:20.393621 | orchestrator | 2025-09-16 00:48:20.393629 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-09-16 00:48:20.393638 | orchestrator | Tuesday 16 September 2025 00:46:58 +0000 (0:00:00.652) 0:02:27.512 ***** 2025-09-16 00:48:20.393647 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:48:20.393656 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:48:20.393665 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:48:20.393674 | orchestrator | 2025-09-16 00:48:20.393682 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-09-16 00:48:20.393691 | orchestrator | Tuesday 16 September 2025 00:46:58 +0000 (0:00:00.312) 0:02:27.825 ***** 2025-09-16 00:48:20.393699 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:48:20.393708 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:48:20.393735 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:48:20.393744 | orchestrator | 2025-09-16 00:48:20.393753 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-09-16 00:48:20.393761 | orchestrator | Tuesday 16 September 2025 00:46:58 +0000 (0:00:00.293) 0:02:28.119 ***** 2025-09-16 00:48:20.393770 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:48:20.393779 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:48:20.393787 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:48:20.393796 | orchestrator | 2025-09-16 00:48:20.393804 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2025-09-16 00:48:20.393813 | orchestrator | Tuesday 16 September 2025 00:46:59 +0000 (0:00:00.286) 0:02:28.405 ***** 2025-09-16 00:48:20.393822 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:48:20.393830 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:48:20.393839 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:48:20.393847 | orchestrator | 2025-09-16 00:48:20.393856 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-09-16 00:48:20.393864 | orchestrator | Tuesday 16 September 2025 00:46:59 +0000 (0:00:00.688) 0:02:29.094 ***** 2025-09-16 00:48:20.393873 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:48:20.393882 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:48:20.393890 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:48:20.393908 | orchestrator | 2025-09-16 00:48:20.393917 | orchestrator | TASK 
[k3s_agent : Configure the k3s service] *********************************** 2025-09-16 00:48:20.393926 | orchestrator | Tuesday 16 September 2025 00:47:01 +0000 (0:00:01.466) 0:02:30.561 ***** 2025-09-16 00:48:20.393934 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:48:20.393943 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:48:20.393951 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:48:20.393960 | orchestrator | 2025-09-16 00:48:20.393968 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-09-16 00:48:20.393977 | orchestrator | Tuesday 16 September 2025 00:47:02 +0000 (0:00:01.777) 0:02:32.339 ***** 2025-09-16 00:48:20.393986 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:48:20.393994 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:48:20.394003 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:48:20.394011 | orchestrator | 2025-09-16 00:48:20.394060 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-16 00:48:20.394070 | orchestrator | 2025-09-16 00:48:20.394078 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-16 00:48:20.394087 | orchestrator | Tuesday 16 September 2025 00:47:16 +0000 (0:00:13.060) 0:02:45.399 ***** 2025-09-16 00:48:20.394096 | orchestrator | ok: [testbed-manager] 2025-09-16 00:48:20.394104 | orchestrator | 2025-09-16 00:48:20.394118 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-16 00:48:20.394127 | orchestrator | Tuesday 16 September 2025 00:47:16 +0000 (0:00:00.868) 0:02:46.267 ***** 2025-09-16 00:48:20.394136 | orchestrator | changed: [testbed-manager] 2025-09-16 00:48:20.394144 | orchestrator | 2025-09-16 00:48:20.394153 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-16 00:48:20.394161 | orchestrator | Tuesday 16 September 2025 00:47:17 +0000 (0:00:00.470) 0:02:46.737 ***** 2025-09-16 00:48:20.394170 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-16 00:48:20.394178 | orchestrator | 2025-09-16 00:48:20.394187 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-16 00:48:20.394195 | orchestrator | Tuesday 16 September 2025 00:47:17 +0000 (0:00:00.548) 0:02:47.285 ***** 2025-09-16 00:48:20.394204 | orchestrator | changed: [testbed-manager] 2025-09-16 00:48:20.394212 | orchestrator | 2025-09-16 00:48:20.394221 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-16 00:48:20.394229 | orchestrator | Tuesday 16 September 2025 00:47:18 +0000 (0:00:01.074) 0:02:48.360 ***** 2025-09-16 00:48:20.394238 | orchestrator | changed: [testbed-manager] 2025-09-16 00:48:20.394246 | orchestrator | 2025-09-16 00:48:20.394255 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-16 00:48:20.394263 | orchestrator | Tuesday 16 September 2025 00:47:19 +0000 (0:00:00.524) 0:02:48.885 ***** 2025-09-16 00:48:20.394272 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-16 00:48:20.394280 | orchestrator | 2025-09-16 00:48:20.394289 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-16 00:48:20.394298 | orchestrator | Tuesday 16 September 2025 00:47:21 +0000 (0:00:01.508) 0:02:50.393 ***** 2025-09-16 
00:48:20.394306 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-16 00:48:20.394315 | orchestrator | 2025-09-16 00:48:20.394323 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-16 00:48:20.394332 | orchestrator | Tuesday 16 September 2025 00:47:21 +0000 (0:00:00.823) 0:02:51.217 ***** 2025-09-16 00:48:20.394340 | orchestrator | changed: [testbed-manager] 2025-09-16 00:48:20.394349 | orchestrator | 2025-09-16 00:48:20.394357 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-16 00:48:20.394366 | orchestrator | Tuesday 16 September 2025 00:47:22 +0000 (0:00:00.597) 0:02:51.815 ***** 2025-09-16 00:48:20.394374 | orchestrator | changed: [testbed-manager] 2025-09-16 00:48:20.394383 | orchestrator | 2025-09-16 00:48:20.394392 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-09-16 00:48:20.394406 | orchestrator | 2025-09-16 00:48:20.394415 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-09-16 00:48:20.394423 | orchestrator | Tuesday 16 September 2025 00:47:23 +0000 (0:00:00.668) 0:02:52.484 ***** 2025-09-16 00:48:20.394432 | orchestrator | ok: [testbed-manager] 2025-09-16 00:48:20.394440 | orchestrator | 2025-09-16 00:48:20.394449 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-09-16 00:48:20.394458 | orchestrator | Tuesday 16 September 2025 00:47:23 +0000 (0:00:00.125) 0:02:52.609 ***** 2025-09-16 00:48:20.394466 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-09-16 00:48:20.394475 | orchestrator | 2025-09-16 00:48:20.394483 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-09-16 00:48:20.394492 | orchestrator | Tuesday 16 September 2025 00:47:23 +0000 (0:00:00.220) 0:02:52.830 ***** 2025-09-16 00:48:20.394500 | orchestrator | ok: [testbed-manager] 2025-09-16 00:48:20.394509 | orchestrator | 2025-09-16 00:48:20.394517 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-09-16 00:48:20.394526 | orchestrator | Tuesday 16 September 2025 00:47:24 +0000 (0:00:00.886) 0:02:53.717 ***** 2025-09-16 00:48:20.394535 | orchestrator | ok: [testbed-manager] 2025-09-16 00:48:20.394543 | orchestrator | 2025-09-16 00:48:20.394552 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-09-16 00:48:20.394560 | orchestrator | Tuesday 16 September 2025 00:47:25 +0000 (0:00:01.415) 0:02:55.133 ***** 2025-09-16 00:48:20.394569 | orchestrator | changed: [testbed-manager] 2025-09-16 00:48:20.394577 | orchestrator | 2025-09-16 00:48:20.394586 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-09-16 00:48:20.394594 | orchestrator | Tuesday 16 September 2025 00:47:26 +0000 (0:00:00.787) 0:02:55.920 ***** 2025-09-16 00:48:20.394603 | orchestrator | ok: [testbed-manager] 2025-09-16 00:48:20.394612 | orchestrator | 2025-09-16 00:48:20.394620 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-09-16 00:48:20.394629 | orchestrator | Tuesday 16 September 2025 00:47:26 +0000 (0:00:00.372) 0:02:56.293 ***** 2025-09-16 00:48:20.394637 | orchestrator | changed: [testbed-manager] 2025-09-16 00:48:20.394646 | orchestrator | 2025-09-16 
00:48:20.394654 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-09-16 00:48:20.394663 | orchestrator | Tuesday 16 September 2025 00:47:33 +0000 (0:00:06.730) 0:03:03.023 ***** 2025-09-16 00:48:20.394671 | orchestrator | changed: [testbed-manager] 2025-09-16 00:48:20.394680 | orchestrator | 2025-09-16 00:48:20.394688 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-09-16 00:48:20.394697 | orchestrator | Tuesday 16 September 2025 00:47:48 +0000 (0:00:15.128) 0:03:18.152 ***** 2025-09-16 00:48:20.394705 | orchestrator | ok: [testbed-manager] 2025-09-16 00:48:20.394776 | orchestrator | 2025-09-16 00:48:20.394793 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-09-16 00:48:20.394805 | orchestrator | 2025-09-16 00:48:20.394817 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-09-16 00:48:20.394838 | orchestrator | Tuesday 16 September 2025 00:47:49 +0000 (0:00:00.466) 0:03:18.618 ***** 2025-09-16 00:48:20.394852 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:48:20.394861 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:48:20.394870 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:48:20.394878 | orchestrator | 2025-09-16 00:48:20.394887 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-09-16 00:48:20.394895 | orchestrator | Tuesday 16 September 2025 00:47:49 +0000 (0:00:00.391) 0:03:19.010 ***** 2025-09-16 00:48:20.394904 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.394921 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:48:20.394930 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:48:20.394938 | orchestrator | 2025-09-16 00:48:20.394947 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-09-16 00:48:20.394962 | orchestrator | Tuesday 16 September 2025 00:47:49 +0000 (0:00:00.333) 0:03:19.344 ***** 2025-09-16 00:48:20.394971 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:48:20.394979 | orchestrator | 2025-09-16 00:48:20.394988 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-09-16 00:48:20.394996 | orchestrator | Tuesday 16 September 2025 00:47:50 +0000 (0:00:00.852) 0:03:20.196 ***** 2025-09-16 00:48:20.395005 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395013 | orchestrator | 2025-09-16 00:48:20.395022 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] ********************** 2025-09-16 00:48:20.395031 | orchestrator | Tuesday 16 September 2025 00:47:51 +0000 (0:00:00.194) 0:03:20.390 ***** 2025-09-16 00:48:20.395039 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395048 | orchestrator | 2025-09-16 00:48:20.395056 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ******** 2025-09-16 00:48:20.395065 | orchestrator | Tuesday 16 September 2025 00:47:51 +0000 (0:00:00.187) 0:03:20.578 ***** 2025-09-16 00:48:20.395073 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395082 | orchestrator | 2025-09-16 00:48:20.395090 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] ************* 2025-09-16 00:48:20.395099 | orchestrator | Tuesday 16 
September 2025 00:47:51 +0000 (0:00:00.167) 0:03:20.746 ***** 2025-09-16 00:48:20.395108 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395116 | orchestrator | 2025-09-16 00:48:20.395125 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] ************** 2025-09-16 00:48:20.395133 | orchestrator | Tuesday 16 September 2025 00:47:51 +0000 (0:00:00.196) 0:03:20.942 ***** 2025-09-16 00:48:20.395142 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395150 | orchestrator | 2025-09-16 00:48:20.395159 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] ********************** 2025-09-16 00:48:20.395167 | orchestrator | Tuesday 16 September 2025 00:47:51 +0000 (0:00:00.187) 0:03:21.129 ***** 2025-09-16 00:48:20.395175 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395183 | orchestrator | 2025-09-16 00:48:20.395191 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ****************** 2025-09-16 00:48:20.395198 | orchestrator | Tuesday 16 September 2025 00:47:51 +0000 (0:00:00.242) 0:03:21.372 ***** 2025-09-16 00:48:20.395206 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395214 | orchestrator | 2025-09-16 00:48:20.395222 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] *** 2025-09-16 00:48:20.395230 | orchestrator | Tuesday 16 September 2025 00:47:52 +0000 (0:00:00.185) 0:03:21.557 ***** 2025-09-16 00:48:20.395237 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395245 | orchestrator | 2025-09-16 00:48:20.395253 | orchestrator | TASK [k3s_server_post : Set architecture variable] ***************************** 2025-09-16 00:48:20.395261 | orchestrator | Tuesday 16 September 2025 00:47:52 +0000 (0:00:00.174) 0:03:21.732 ***** 2025-09-16 00:48:20.395269 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395276 | orchestrator | 2025-09-16 00:48:20.395284 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] ********************** 2025-09-16 00:48:20.395292 | orchestrator | Tuesday 16 September 2025 00:47:52 +0000 (0:00:00.166) 0:03:21.898 ***** 2025-09-16 00:48:20.395300 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)  2025-09-16 00:48:20.395308 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)  2025-09-16 00:48:20.395315 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395323 | orchestrator | 2025-09-16 00:48:20.395331 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] ************************* 2025-09-16 00:48:20.395339 | orchestrator | Tuesday 16 September 2025 00:47:53 +0000 (0:00:00.522) 0:03:22.420 ***** 2025-09-16 00:48:20.395347 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395354 | orchestrator | 2025-09-16 00:48:20.395362 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ****************** 2025-09-16 00:48:20.395370 | orchestrator | Tuesday 16 September 2025 00:47:53 +0000 (0:00:00.192) 0:03:22.612 ***** 2025-09-16 00:48:20.395383 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395391 | orchestrator | 2025-09-16 00:48:20.395399 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] *********** 2025-09-16 00:48:20.395407 | orchestrator | Tuesday 16 September 2025 00:47:53 +0000 (0:00:00.184) 0:03:22.797 ***** 2025-09-16 00:48:20.395415 | orchestrator | skipping: [testbed-node-0] 2025-09-16 
00:48:20.395422 | orchestrator | 2025-09-16 00:48:20.395430 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-09-16 00:48:20.395438 | orchestrator | Tuesday 16 September 2025 00:47:53 +0000 (0:00:00.183) 0:03:22.980 ***** 2025-09-16 00:48:20.395446 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395453 | orchestrator | 2025-09-16 00:48:20.395461 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-09-16 00:48:20.395469 | orchestrator | Tuesday 16 September 2025 00:47:53 +0000 (0:00:00.270) 0:03:23.251 ***** 2025-09-16 00:48:20.395477 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395484 | orchestrator | 2025-09-16 00:48:20.395492 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-09-16 00:48:20.395500 | orchestrator | Tuesday 16 September 2025 00:47:54 +0000 (0:00:00.218) 0:03:23.470 ***** 2025-09-16 00:48:20.395507 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395515 | orchestrator | 2025-09-16 00:48:20.395523 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-09-16 00:48:20.395535 | orchestrator | Tuesday 16 September 2025 00:47:54 +0000 (0:00:00.319) 0:03:23.790 ***** 2025-09-16 00:48:20.395543 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395551 | orchestrator | 2025-09-16 00:48:20.395558 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-09-16 00:48:20.395566 | orchestrator | Tuesday 16 September 2025 00:47:54 +0000 (0:00:00.272) 0:03:24.062 ***** 2025-09-16 00:48:20.395574 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395581 | orchestrator | 2025-09-16 00:48:20.395593 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-09-16 00:48:20.395601 | orchestrator | Tuesday 16 September 2025 00:47:54 +0000 (0:00:00.229) 0:03:24.292 ***** 2025-09-16 00:48:20.395609 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395616 | orchestrator | 2025-09-16 00:48:20.395624 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-09-16 00:48:20.395632 | orchestrator | Tuesday 16 September 2025 00:47:55 +0000 (0:00:00.194) 0:03:24.486 ***** 2025-09-16 00:48:20.395639 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395647 | orchestrator | 2025-09-16 00:48:20.395655 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-09-16 00:48:20.395662 | orchestrator | Tuesday 16 September 2025 00:47:55 +0000 (0:00:00.196) 0:03:24.683 ***** 2025-09-16 00:48:20.395670 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395678 | orchestrator | 2025-09-16 00:48:20.395685 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-09-16 00:48:20.395693 | orchestrator | Tuesday 16 September 2025 00:47:55 +0000 (0:00:00.201) 0:03:24.885 ***** 2025-09-16 00:48:20.395701 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)  2025-09-16 00:48:20.395708 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)  2025-09-16 00:48:20.395732 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)  2025-09-16 00:48:20.395740 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)  
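The skipped tasks above would install the Cilium CLI (download the tarball and its sha256sum, verify, extract to /usr/local/bin) and then block until the Cilium workloads listed above have rolled out. A minimal sketch of that final wait step, assuming kubectl and the default k3s kubeconfig path; the timeout value is an assumption, not taken from the role:

```yaml
# Sketch of the "Wait for Cilium resources" step: block on the first
# master until each Cilium workload named in the log has rolled out.
# Kubeconfig path and timeout are assumptions.
- name: Wait for Cilium resources
  ansible.builtin.command: >-
    kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml
    --namespace kube-system rollout status --timeout=240s {{ item }}
  loop:
    - deployment/cilium-operator
    - daemonset/cilium
    - deployment/hubble-relay
    - deployment/hubble-ui
  run_once: true
  changed_when: false
```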
2025-09-16 00:48:20.395748 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395755 | orchestrator | 2025-09-16 00:48:20.395763 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-09-16 00:48:20.395770 | orchestrator | Tuesday 16 September 2025 00:47:56 +0000 (0:00:00.849) 0:03:25.734 ***** 2025-09-16 00:48:20.395778 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395786 | orchestrator | 2025-09-16 00:48:20.395793 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-09-16 00:48:20.395807 | orchestrator | Tuesday 16 September 2025 00:47:56 +0000 (0:00:00.214) 0:03:25.949 ***** 2025-09-16 00:48:20.395815 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395822 | orchestrator | 2025-09-16 00:48:20.395830 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-09-16 00:48:20.395838 | orchestrator | Tuesday 16 September 2025 00:47:56 +0000 (0:00:00.199) 0:03:26.149 ***** 2025-09-16 00:48:20.395846 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395853 | orchestrator | 2025-09-16 00:48:20.395861 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-09-16 00:48:20.395869 | orchestrator | Tuesday 16 September 2025 00:47:56 +0000 (0:00:00.195) 0:03:26.344 ***** 2025-09-16 00:48:20.395876 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395884 | orchestrator | 2025-09-16 00:48:20.395892 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-09-16 00:48:20.395899 | orchestrator | Tuesday 16 September 2025 00:47:57 +0000 (0:00:00.220) 0:03:26.565 ***** 2025-09-16 00:48:20.395907 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)  2025-09-16 00:48:20.395915 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)  2025-09-16 00:48:20.395922 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395930 | orchestrator | 2025-09-16 00:48:20.395937 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-09-16 00:48:20.395945 | orchestrator | Tuesday 16 September 2025 00:47:57 +0000 (0:00:00.294) 0:03:26.859 ***** 2025-09-16 00:48:20.395953 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.395960 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:48:20.395968 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:48:20.395976 | orchestrator | 2025-09-16 00:48:20.395984 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-09-16 00:48:20.395991 | orchestrator | Tuesday 16 September 2025 00:47:57 +0000 (0:00:00.337) 0:03:27.197 ***** 2025-09-16 00:48:20.395999 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:48:20.396007 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:48:20.396014 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:48:20.396022 | orchestrator | 2025-09-16 00:48:20.396030 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-09-16 00:48:20.396037 | orchestrator | 2025-09-16 00:48:20.396045 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-09-16 00:48:20.396053 | orchestrator | Tuesday 16 September 2025 00:47:59 +0000 (0:00:01.210) 0:03:28.407 ***** 
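Both the kubectl role earlier in this run and the k9s role that starts here resolve their package handling the same way: load per-distribution variables, then include a family-specific task file such as install-Debian-family.yml. A minimal sketch of that dispatch pattern, with illustrative file layout and variable names:

```yaml
# Sketch of the per-distribution dispatch used by the kubectl and k9s
# roles: pick the closest matching vars file, then include the install
# tasks for the OS family. Paths and file names are illustrative.
- name: Gather variables for each operating system
  ansible.builtin.include_vars: "{{ lookup('ansible.builtin.first_found', params) }}"
  vars:
    params:
      files:
        - "{{ ansible_facts['distribution'] }}.yml"
        - "{{ ansible_facts['os_family'] }}.yml"
      paths:
        - vars

- name: Include distribution specific install tasks
  ansible.builtin.include_tasks: "install-{{ ansible_facts['os_family'] }}-family.yml"
```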
2025-09-16 00:48:20.396060 | orchestrator | ok: [testbed-manager] 2025-09-16 00:48:20.396068 | orchestrator | 2025-09-16 00:48:20.396076 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-09-16 00:48:20.396083 | orchestrator | Tuesday 16 September 2025 00:47:59 +0000 (0:00:00.292) 0:03:28.700 ***** 2025-09-16 00:48:20.396091 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-09-16 00:48:20.396099 | orchestrator | 2025-09-16 00:48:20.396106 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-09-16 00:48:20.396114 | orchestrator | Tuesday 16 September 2025 00:47:59 +0000 (0:00:00.315) 0:03:29.015 ***** 2025-09-16 00:48:20.396121 | orchestrator | changed: [testbed-manager] 2025-09-16 00:48:20.396129 | orchestrator | 2025-09-16 00:48:20.396137 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-09-16 00:48:20.396145 | orchestrator | 2025-09-16 00:48:20.396152 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-09-16 00:48:20.396164 | orchestrator | Tuesday 16 September 2025 00:48:04 +0000 (0:00:05.113) 0:03:34.129 ***** 2025-09-16 00:48:20.396172 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:48:20.396180 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:48:20.396188 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:48:20.396196 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:48:20.396208 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:48:20.396216 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:48:20.396224 | orchestrator | 2025-09-16 00:48:20.396232 | orchestrator | TASK [Manage labels] *********************************************************** 2025-09-16 00:48:20.396243 | orchestrator | Tuesday 16 September 2025 00:48:05 +0000 (0:00:00.676) 0:03:34.805 ***** 2025-09-16 00:48:20.396252 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-16 00:48:20.396259 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-16 00:48:20.396267 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-16 00:48:20.396274 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-16 00:48:20.396282 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-16 00:48:20.396290 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-16 00:48:20.396297 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-16 00:48:20.396305 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-16 00:48:20.396313 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-16 00:48:20.396320 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-16 00:48:20.396328 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-16 00:48:20.396336 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-16 00:48:20.396343 | orchestrator | ok: [testbed-node-5 -> localhost] => 
(item=node-role.kubernetes.io/worker=worker) 2025-09-16 00:48:20.396351 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-16 00:48:20.396358 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-16 00:48:20.396366 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-16 00:48:20.396374 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-16 00:48:20.396381 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-16 00:48:20.396389 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-16 00:48:20.396396 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-16 00:48:20.396404 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-16 00:48:20.396412 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-16 00:48:20.396419 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-16 00:48:20.396427 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-16 00:48:20.396435 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-16 00:48:20.396442 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-16 00:48:20.396450 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-16 00:48:20.396458 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-16 00:48:20.396465 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-16 00:48:20.396473 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-16 00:48:20.396481 | orchestrator | 2025-09-16 00:48:20.396488 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-09-16 00:48:20.396502 | orchestrator | Tuesday 16 September 2025 00:48:15 +0000 (0:00:10.534) 0:03:45.340 ***** 2025-09-16 00:48:20.396509 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:48:20.396517 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:48:20.396525 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:48:20.396533 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.396540 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:48:20.396548 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:48:20.396556 | orchestrator | 2025-09-16 00:48:20.396563 | orchestrator | TASK [Manage taints] *********************************************************** 2025-09-16 00:48:20.396571 | orchestrator | Tuesday 16 September 2025 00:48:16 +0000 (0:00:00.562) 0:03:45.902 ***** 2025-09-16 00:48:20.396579 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:48:20.396586 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:48:20.396594 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:48:20.396602 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:48:20.396609 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:48:20.396617 | orchestrator | 
skipping: [testbed-node-5] 2025-09-16 00:48:20.396624 | orchestrator | 2025-09-16 00:48:20.396632 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:48:20.396645 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:48:20.396653 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-09-16 00:48:20.396665 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-16 00:48:20.396674 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-16 00:48:20.396682 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-16 00:48:20.396689 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-16 00:48:20.396697 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-16 00:48:20.396705 | orchestrator | 2025-09-16 00:48:20.396726 | orchestrator | 2025-09-16 00:48:20.396734 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:48:20.396742 | orchestrator | Tuesday 16 September 2025 00:48:17 +0000 (0:00:00.558) 0:03:46.460 ***** 2025-09-16 00:48:20.396750 | orchestrator | =============================================================================== 2025-09-16 00:48:20.396757 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.87s 2025-09-16 00:48:20.396765 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.04s 2025-09-16 00:48:20.396773 | orchestrator | kubectl : Install required packages ------------------------------------ 15.13s 2025-09-16 00:48:20.396780 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 13.06s 2025-09-16 00:48:20.396788 | orchestrator | Manage labels ---------------------------------------------------------- 10.53s 2025-09-16 00:48:20.396796 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.73s 2025-09-16 00:48:20.396803 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.68s 2025-09-16 00:48:20.396811 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.11s 2025-09-16 00:48:20.396819 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.30s 2025-09-16 00:48:20.396832 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 3.29s 2025-09-16 00:48:20.396840 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.05s 2025-09-16 00:48:20.396848 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 3.00s 2025-09-16 00:48:20.396856 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.75s 2025-09-16 00:48:20.396863 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.57s 2025-09-16 00:48:20.396871 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.28s 2025-09-16 
00:48:20.396879 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.92s 2025-09-16 00:48:20.396887 | orchestrator | k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry --- 1.80s 2025-09-16 00:48:20.396894 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 1.78s 2025-09-16 00:48:20.396902 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.71s 2025-09-16 00:48:20.396910 | orchestrator | k3s_server : Clean previous runs of k3s-init ---------------------------- 1.58s 2025-09-16 00:48:20.396917 | orchestrator | 2025-09-16 00:48:20 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:48:20.396926 | orchestrator | 2025-09-16 00:48:20 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:48:20.396933 | orchestrator | 2025-09-16 00:48:20 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:48:20.396941 | orchestrator | 2025-09-16 00:48:20 | INFO  | Task 9239a9e6-9a72-447f-a949-eebe7d390138 is in state SUCCESS 2025-09-16 00:48:20.396949 | orchestrator | 2025-09-16 00:48:20 | INFO  | Task 90923e73-4134-4df0-b2b7-2879cccdeb86 is in state STARTED 2025-09-16 00:48:20.396957 | orchestrator | 2025-09-16 00:48:20 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:48:20.396964 | orchestrator | 2025-09-16 00:48:20 | INFO  | Task 19caf2f2-7725-4df3-b82c-3fdc623927a5 is in state STARTED 2025-09-16 00:48:20.396972 | orchestrator | 2025-09-16 00:48:20 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:48:23.636234 | orchestrator | 2025-09-16 00:48:23 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:48:23.637634 | orchestrator | 2025-09-16 00:48:23 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:48:23.638248 | orchestrator | 2025-09-16 00:48:23 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:48:23.640021 | orchestrator | 2025-09-16 00:48:23 | INFO  | Task 90923e73-4134-4df0-b2b7-2879cccdeb86 is in state STARTED 2025-09-16 00:48:23.641812 | orchestrator | 2025-09-16 00:48:23 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:48:23.643272 | orchestrator | 2025-09-16 00:48:23 | INFO  | Task 19caf2f2-7725-4df3-b82c-3fdc623927a5 is in state STARTED 2025-09-16 00:48:23.643301 | orchestrator | 2025-09-16 00:48:23 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:48:26.677991 | orchestrator | 2025-09-16 00:48:26 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:48:26.678140 | orchestrator | 2025-09-16 00:48:26 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:48:26.678512 | orchestrator | 2025-09-16 00:48:26 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:48:26.679118 | orchestrator | 2025-09-16 00:48:26 | INFO  | Task 90923e73-4134-4df0-b2b7-2879cccdeb86 is in state STARTED 2025-09-16 00:48:26.683986 | orchestrator | 2025-09-16 00:48:26 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:48:26.684325 | orchestrator | 2025-09-16 00:48:26 | INFO  | Task 19caf2f2-7725-4df3-b82c-3fdc623927a5 is in state SUCCESS 2025-09-16 00:48:26.684347 | orchestrator | 2025-09-16 00:48:26 | INFO  | Wait 1 
second(s) until the next check 2025-09-16 00:48:29.711024 | orchestrator | 2025-09-16 00:48:29 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:48:29.716053 | orchestrator | 2025-09-16 00:48:29 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:48:29.719871 | orchestrator | 2025-09-16 00:48:29 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:48:29.720536 | orchestrator | 2025-09-16 00:48:29 | INFO  | Task 90923e73-4134-4df0-b2b7-2879cccdeb86 is in state SUCCESS 2025-09-16 00:48:29.722156 | orchestrator | 2025-09-16 00:48:29 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:48:29.722501 | orchestrator | 2025-09-16 00:48:29 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:48:32.770330 | orchestrator | 2025-09-16 00:48:32 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:48:32.772481 | orchestrator | 2025-09-16 00:48:32 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:48:32.773923 | orchestrator | 2025-09-16 00:48:32 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:48:32.775873 | orchestrator | 2025-09-16 00:48:32 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:48:32.775899 | orchestrator | 2025-09-16 00:48:32 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:48:35.817091 | orchestrator | 2025-09-16 00:48:35 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:48:35.818992 | orchestrator | 2025-09-16 00:48:35 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:48:35.821193 | orchestrator | 2025-09-16 00:48:35 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:48:35.822891 | orchestrator | 2025-09-16 00:48:35 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:48:35.822915 | orchestrator | 2025-09-16 00:48:35 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:48:38.871294 | orchestrator | 2025-09-16 00:48:38 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:48:38.872750 | orchestrator | 2025-09-16 00:48:38 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:48:38.874444 | orchestrator | 2025-09-16 00:48:38 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:48:38.876026 | orchestrator | 2025-09-16 00:48:38 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:48:38.876065 | orchestrator | 2025-09-16 00:48:38 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:48:41.915333 | orchestrator | 2025-09-16 00:48:41 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:48:41.915932 | orchestrator | 2025-09-16 00:48:41 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:48:41.916866 | orchestrator | 2025-09-16 00:48:41 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:48:41.918194 | orchestrator | 2025-09-16 00:48:41 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:48:41.918313 | orchestrator | 2025-09-16 00:48:41 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:48:44.968864 | orchestrator | 2025-09-16 00:48:44 | INFO  | Task 
e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:48:44.969352 | orchestrator | 2025-09-16 00:48:44 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:48:44.970114 | orchestrator | 2025-09-16 00:48:44 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:48:44.971810 | orchestrator | 2025-09-16 00:48:44 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:48:44.971835 | orchestrator | 2025-09-16 00:48:44 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:48:48.019652 | orchestrator | 2025-09-16 00:48:48 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:48:48.021504 | orchestrator | 2025-09-16 00:48:48 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:48:48.024680 | orchestrator | 2025-09-16 00:48:48 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:48:48.027342 | orchestrator | 2025-09-16 00:48:48 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:48:48.027941 | orchestrator | 2025-09-16 00:48:48 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:48:51.095515 | orchestrator | 2025-09-16 00:48:51 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:48:51.097359 | orchestrator | 2025-09-16 00:48:51 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:48:51.100620 | orchestrator | 2025-09-16 00:48:51 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:48:51.101799 | orchestrator | 2025-09-16 00:48:51 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:48:51.103055 | orchestrator | 2025-09-16 00:48:51 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:48:54.150364 | orchestrator | 2025-09-16 00:48:54 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:48:54.155904 | orchestrator | 2025-09-16 00:48:54 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:48:54.162804 | orchestrator | 2025-09-16 00:48:54 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:48:54.172062 | orchestrator | 2025-09-16 00:48:54 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:48:54.172635 | orchestrator | 2025-09-16 00:48:54 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:48:57.227497 | orchestrator | 2025-09-16 00:48:57 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:48:57.230484 | orchestrator | 2025-09-16 00:48:57 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:48:57.233025 | orchestrator | 2025-09-16 00:48:57 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:48:57.235050 | orchestrator | 2025-09-16 00:48:57 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:48:57.235083 | orchestrator | 2025-09-16 00:48:57 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:49:00.285136 | orchestrator | 2025-09-16 00:49:00 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:49:00.286155 | orchestrator | 2025-09-16 00:49:00 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:49:00.287369 | orchestrator | 2025-09-16 00:49:00 | INFO  | Task 
b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:49:00.288865 | orchestrator | 2025-09-16 00:49:00 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:49:00.288896 | orchestrator | 2025-09-16 00:49:00 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:49:03.327265 | orchestrator | 2025-09-16 00:49:03 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:49:03.330669 | orchestrator | 2025-09-16 00:49:03 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:49:03.333495 | orchestrator | 2025-09-16 00:49:03 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:49:03.336751 | orchestrator | 2025-09-16 00:49:03 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:49:03.336896 | orchestrator | 2025-09-16 00:49:03 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:49:06.383356 | orchestrator | 2025-09-16 00:49:06 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:49:06.385756 | orchestrator | 2025-09-16 00:49:06 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:49:06.387881 | orchestrator | 2025-09-16 00:49:06 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:49:06.390145 | orchestrator | 2025-09-16 00:49:06 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:49:06.390505 | orchestrator | 2025-09-16 00:49:06 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:49:09.424230 | orchestrator | 2025-09-16 00:49:09 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:49:09.426251 | orchestrator | 2025-09-16 00:49:09 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:49:09.427882 | orchestrator | 2025-09-16 00:49:09 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:49:09.431231 | orchestrator | 2025-09-16 00:49:09 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:49:09.432051 | orchestrator | 2025-09-16 00:49:09 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:49:12.481061 | orchestrator | 2025-09-16 00:49:12 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:49:12.481160 | orchestrator | 2025-09-16 00:49:12 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:49:12.481176 | orchestrator | 2025-09-16 00:49:12 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:49:12.482086 | orchestrator | 2025-09-16 00:49:12 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:49:12.482124 | orchestrator | 2025-09-16 00:49:12 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:49:15.519497 | orchestrator | 2025-09-16 00:49:15 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:49:15.522266 | orchestrator | 2025-09-16 00:49:15 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:49:15.522895 | orchestrator | 2025-09-16 00:49:15 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:49:15.523615 | orchestrator | 2025-09-16 00:49:15 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:49:15.523638 | orchestrator | 2025-09-16 00:49:15 | INFO  | Wait 1 
second(s) until the next check 2025-09-16 00:49:18.557932 | orchestrator | 2025-09-16 00:49:18 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:49:18.558122 | orchestrator | 2025-09-16 00:49:18 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:49:18.558625 | orchestrator | 2025-09-16 00:49:18 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:49:18.562184 | orchestrator | 2025-09-16 00:49:18 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:49:18.562214 | orchestrator | 2025-09-16 00:49:18 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:49:21.591516 | orchestrator | 2025-09-16 00:49:21 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:49:21.592923 | orchestrator | 2025-09-16 00:49:21 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:49:21.593655 | orchestrator | 2025-09-16 00:49:21 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:49:21.594519 | orchestrator | 2025-09-16 00:49:21 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:49:21.594546 | orchestrator | 2025-09-16 00:49:21 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:49:24.620585 | orchestrator | 2025-09-16 00:49:24 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:49:24.620699 | orchestrator | 2025-09-16 00:49:24 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:49:24.621150 | orchestrator | 2025-09-16 00:49:24 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:49:24.621879 | orchestrator | 2025-09-16 00:49:24 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:49:24.621903 | orchestrator | 2025-09-16 00:49:24 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:49:27.651379 | orchestrator | 2025-09-16 00:49:27 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:49:27.651480 | orchestrator | 2025-09-16 00:49:27 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:49:27.652256 | orchestrator | 2025-09-16 00:49:27 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:49:27.652933 | orchestrator | 2025-09-16 00:49:27 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:49:27.652954 | orchestrator | 2025-09-16 00:49:27 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:49:30.688215 | orchestrator | 2025-09-16 00:49:30 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:49:30.689462 | orchestrator | 2025-09-16 00:49:30 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:49:30.690362 | orchestrator | 2025-09-16 00:49:30 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:49:30.691290 | orchestrator | 2025-09-16 00:49:30 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:49:30.691315 | orchestrator | 2025-09-16 00:49:30 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:49:33.728149 | orchestrator | 2025-09-16 00:49:33 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:49:33.730375 | orchestrator | 2025-09-16 00:49:33 | INFO  | Task 
db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:49:33.731352 | orchestrator | 2025-09-16 00:49:33 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:49:33.733848 | orchestrator | 2025-09-16 00:49:33 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:49:33.733896 | orchestrator | 2025-09-16 00:49:33 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:49:36.765010 | orchestrator | 2025-09-16 00:49:36 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:49:36.766086 | orchestrator | 2025-09-16 00:49:36 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:49:36.767502 | orchestrator | 2025-09-16 00:49:36 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state STARTED 2025-09-16 00:49:36.769366 | orchestrator | 2025-09-16 00:49:36 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:49:36.769391 | orchestrator | 2025-09-16 00:49:36 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:49:39.811929 | orchestrator | 2025-09-16 00:49:39 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:49:39.813147 | orchestrator | 2025-09-16 00:49:39 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:49:39.815032 | orchestrator | 2025-09-16 00:49:39 | INFO  | Task b5ec9a87-8cf2-4282-b311-9394b57215e5 is in state SUCCESS 2025-09-16 00:49:39.816517 | orchestrator | 2025-09-16 00:49:39.816547 | orchestrator | 2025-09-16 00:49:39.816560 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-09-16 00:49:39.816572 | orchestrator | 2025-09-16 00:49:39.816583 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-16 00:49:39.816595 | orchestrator | Tuesday 16 September 2025 00:48:21 +0000 (0:00:00.183) 0:00:00.183 ***** 2025-09-16 00:49:39.816606 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-16 00:49:39.816618 | orchestrator | 2025-09-16 00:49:39.816629 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-16 00:49:39.816640 | orchestrator | Tuesday 16 September 2025 00:48:22 +0000 (0:00:00.833) 0:00:01.017 ***** 2025-09-16 00:49:39.816651 | orchestrator | changed: [testbed-manager] 2025-09-16 00:49:39.816663 | orchestrator | 2025-09-16 00:49:39.816674 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-09-16 00:49:39.816685 | orchestrator | Tuesday 16 September 2025 00:48:23 +0000 (0:00:01.185) 0:00:02.202 ***** 2025-09-16 00:49:39.816832 | orchestrator | changed: [testbed-manager] 2025-09-16 00:49:39.816848 | orchestrator | 2025-09-16 00:49:39.816859 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:49:39.816870 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:49:39.816881 | orchestrator | 2025-09-16 00:49:39.816892 | orchestrator | 2025-09-16 00:49:39.816903 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:49:39.816914 | orchestrator | Tuesday 16 September 2025 00:48:24 +0000 (0:00:00.384) 0:00:02.587 ***** 2025-09-16 00:49:39.816925 | orchestrator | 
=============================================================================== 2025-09-16 00:49:39.816935 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.19s 2025-09-16 00:49:39.816946 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.83s 2025-09-16 00:49:39.816957 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.39s 2025-09-16 00:49:39.816968 | orchestrator | 2025-09-16 00:49:39.816978 | orchestrator | 2025-09-16 00:49:39.816989 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-16 00:49:39.817000 | orchestrator | 2025-09-16 00:49:39.817028 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-16 00:49:39.817040 | orchestrator | Tuesday 16 September 2025 00:48:20 +0000 (0:00:00.168) 0:00:00.168 ***** 2025-09-16 00:49:39.817051 | orchestrator | ok: [testbed-manager] 2025-09-16 00:49:39.817063 | orchestrator | 2025-09-16 00:49:39.817073 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-16 00:49:39.817084 | orchestrator | Tuesday 16 September 2025 00:48:21 +0000 (0:00:00.466) 0:00:00.634 ***** 2025-09-16 00:49:39.817123 | orchestrator | ok: [testbed-manager] 2025-09-16 00:49:39.817135 | orchestrator | 2025-09-16 00:49:39.817146 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-16 00:49:39.817156 | orchestrator | Tuesday 16 September 2025 00:48:21 +0000 (0:00:00.550) 0:00:01.185 ***** 2025-09-16 00:49:39.817167 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-16 00:49:39.817177 | orchestrator | 2025-09-16 00:49:39.817188 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-16 00:49:39.817199 | orchestrator | Tuesday 16 September 2025 00:48:22 +0000 (0:00:00.640) 0:00:01.826 ***** 2025-09-16 00:49:39.817209 | orchestrator | changed: [testbed-manager] 2025-09-16 00:49:39.817220 | orchestrator | 2025-09-16 00:49:39.817231 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-16 00:49:39.817241 | orchestrator | Tuesday 16 September 2025 00:48:23 +0000 (0:00:01.042) 0:00:02.868 ***** 2025-09-16 00:49:39.817252 | orchestrator | changed: [testbed-manager] 2025-09-16 00:49:39.817263 | orchestrator | 2025-09-16 00:49:39.817273 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-16 00:49:39.817284 | orchestrator | Tuesday 16 September 2025 00:48:24 +0000 (0:00:00.729) 0:00:03.598 ***** 2025-09-16 00:49:39.817294 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-16 00:49:39.817305 | orchestrator | 2025-09-16 00:49:39.817316 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-16 00:49:39.817326 | orchestrator | Tuesday 16 September 2025 00:48:25 +0000 (0:00:01.296) 0:00:04.894 ***** 2025-09-16 00:49:39.817337 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-16 00:49:39.817347 | orchestrator | 2025-09-16 00:49:39.817358 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-16 00:49:39.817369 | orchestrator | Tuesday 16 September 2025 00:48:26 +0000 (0:00:00.612) 0:00:05.507 ***** 2025-09-16 00:49:39.817380 | orchestrator | ok: 
[testbed-manager] 2025-09-16 00:49:39.817391 | orchestrator | 2025-09-16 00:49:39.817402 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-16 00:49:39.817413 | orchestrator | Tuesday 16 September 2025 00:48:26 +0000 (0:00:00.351) 0:00:05.858 ***** 2025-09-16 00:49:39.817423 | orchestrator | ok: [testbed-manager] 2025-09-16 00:49:39.817434 | orchestrator | 2025-09-16 00:49:39.817444 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:49:39.817456 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:49:39.817468 | orchestrator | 2025-09-16 00:49:39.817480 | orchestrator | 2025-09-16 00:49:39.817493 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:49:39.817504 | orchestrator | Tuesday 16 September 2025 00:48:26 +0000 (0:00:00.260) 0:00:06.119 ***** 2025-09-16 00:49:39.817517 | orchestrator | =============================================================================== 2025-09-16 00:49:39.817529 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.30s 2025-09-16 00:49:39.817541 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.04s 2025-09-16 00:49:39.817554 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.73s 2025-09-16 00:49:39.817578 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.64s 2025-09-16 00:49:39.817591 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.61s 2025-09-16 00:49:39.817603 | orchestrator | Create .kube directory -------------------------------------------------- 0.55s 2025-09-16 00:49:39.817615 | orchestrator | Get home directory of operator user ------------------------------------- 0.47s 2025-09-16 00:49:39.817627 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.35s 2025-09-16 00:49:39.817639 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.26s 2025-09-16 00:49:39.817651 | orchestrator | 2025-09-16 00:49:39.817663 | orchestrator | 2025-09-16 00:49:39.817684 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-09-16 00:49:39.817697 | orchestrator | 2025-09-16 00:49:39.817731 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-16 00:49:39.817743 | orchestrator | Tuesday 16 September 2025 00:47:21 +0000 (0:00:00.308) 0:00:00.308 ***** 2025-09-16 00:49:39.817756 | orchestrator | ok: [localhost] => { 2025-09-16 00:49:39.817769 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-09-16 00:49:39.817782 | orchestrator | } 2025-09-16 00:49:39.817794 | orchestrator | 2025-09-16 00:49:39.817807 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-09-16 00:49:39.817819 | orchestrator | Tuesday 16 September 2025 00:47:21 +0000 (0:00:00.084) 0:00:00.393 ***** 2025-09-16 00:49:39.817831 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-09-16 00:49:39.817843 | orchestrator | ...ignoring 2025-09-16 00:49:39.817855 | orchestrator | 2025-09-16 00:49:39.817866 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-09-16 00:49:39.817877 | orchestrator | Tuesday 16 September 2025 00:47:24 +0000 (0:00:03.419) 0:00:03.812 ***** 2025-09-16 00:49:39.817887 | orchestrator | skipping: [localhost] 2025-09-16 00:49:39.817898 | orchestrator | 2025-09-16 00:49:39.817909 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-09-16 00:49:39.817926 | orchestrator | Tuesday 16 September 2025 00:47:25 +0000 (0:00:00.061) 0:00:03.873 ***** 2025-09-16 00:49:39.817937 | orchestrator | ok: [localhost] 2025-09-16 00:49:39.817948 | orchestrator | 2025-09-16 00:49:39.817959 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-16 00:49:39.817969 | orchestrator | 2025-09-16 00:49:39.817980 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-16 00:49:39.817991 | orchestrator | Tuesday 16 September 2025 00:47:25 +0000 (0:00:00.243) 0:00:04.117 ***** 2025-09-16 00:49:39.818002 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:49:39.818013 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:49:39.818072 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:49:39.818083 | orchestrator | 2025-09-16 00:49:39.818094 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-16 00:49:39.818105 | orchestrator | Tuesday 16 September 2025 00:47:25 +0000 (0:00:00.562) 0:00:04.680 ***** 2025-09-16 00:49:39.818116 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-09-16 00:49:39.818128 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-09-16 00:49:39.818139 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-09-16 00:49:39.818150 | orchestrator | 2025-09-16 00:49:39.818160 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-09-16 00:49:39.818171 | orchestrator | 2025-09-16 00:49:39.818182 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-16 00:49:39.818193 | orchestrator | Tuesday 16 September 2025 00:47:26 +0000 (0:00:00.873) 0:00:05.553 ***** 2025-09-16 00:49:39.818204 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-2, testbed-node-1 2025-09-16 00:49:39.818215 | orchestrator | 2025-09-16 00:49:39.818225 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-16 00:49:39.818236 | orchestrator | Tuesday 16 September 2025 00:47:28 +0000 (0:00:01.303) 0:00:06.857 ***** 2025-09-16 00:49:39.818247 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:49:39.818258 | orchestrator | 2025-09-16 00:49:39.818268 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-09-16 00:49:39.818279 | orchestrator | Tuesday 16 September 2025 00:47:29 +0000 (0:00:01.079) 0:00:07.936 ***** 2025-09-16 00:49:39.818290 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:49:39.818301 | orchestrator | 2025-09-16 00:49:39.818312 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] 
************************************* 2025-09-16 00:49:39.818331 | orchestrator | Tuesday 16 September 2025 00:47:29 +0000 (0:00:00.739) 0:00:08.675 ***** 2025-09-16 00:49:39.818342 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:49:39.818353 | orchestrator | 2025-09-16 00:49:39.818364 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-09-16 00:49:39.818375 | orchestrator | Tuesday 16 September 2025 00:47:30 +0000 (0:00:00.493) 0:00:09.168 ***** 2025-09-16 00:49:39.818385 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:49:39.818396 | orchestrator | 2025-09-16 00:49:39.818407 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-09-16 00:49:39.818418 | orchestrator | Tuesday 16 September 2025 00:47:30 +0000 (0:00:00.469) 0:00:09.638 ***** 2025-09-16 00:49:39.818429 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:49:39.818439 | orchestrator | 2025-09-16 00:49:39.818451 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-16 00:49:39.818461 | orchestrator | Tuesday 16 September 2025 00:47:31 +0000 (0:00:00.408) 0:00:10.046 ***** 2025-09-16 00:49:39.818472 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:49:39.818483 | orchestrator | 2025-09-16 00:49:39.818494 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-16 00:49:39.818513 | orchestrator | Tuesday 16 September 2025 00:47:31 +0000 (0:00:00.736) 0:00:10.783 ***** 2025-09-16 00:49:39.818524 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:49:39.818535 | orchestrator | 2025-09-16 00:49:39.818546 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-09-16 00:49:39.818556 | orchestrator | Tuesday 16 September 2025 00:47:32 +0000 (0:00:00.802) 0:00:11.585 ***** 2025-09-16 00:49:39.818567 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:49:39.818578 | orchestrator | 2025-09-16 00:49:39.818588 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-09-16 00:49:39.818599 | orchestrator | Tuesday 16 September 2025 00:47:33 +0000 (0:00:00.336) 0:00:11.922 ***** 2025-09-16 00:49:39.818609 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:49:39.818620 | orchestrator | 2025-09-16 00:49:39.818631 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-09-16 00:49:39.818641 | orchestrator | Tuesday 16 September 2025 00:47:33 +0000 (0:00:00.349) 0:00:12.272 ***** 2025-09-16 00:49:39.818663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-16 00:49:39.818682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-16 00:49:39.818727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-16 00:49:39.818741 | orchestrator | 2025-09-16 00:49:39.818752 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-09-16 00:49:39.818763 | orchestrator | Tuesday 16 September 2025 00:47:34 +0000 (0:00:00.788) 0:00:13.060 ***** 2025-09-16 00:49:39.818784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-16 00:49:39.818878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-16 00:49:39.818902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-16 00:49:39.818922 | orchestrator | 2025-09-16 00:49:39.818933 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-09-16 00:49:39.818944 | orchestrator | Tuesday 16 September 2025 00:47:35 +0000 (0:00:01.711) 0:00:14.772 ***** 2025-09-16 00:49:39.818955 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-16 00:49:39.818966 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-16 00:49:39.818976 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-16 00:49:39.818987 | orchestrator | 2025-09-16 00:49:39.818998 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-09-16 00:49:39.819008 | orchestrator | Tuesday 16 September 2025 00:47:37 +0000 (0:00:01.970) 0:00:16.742 ***** 2025-09-16 00:49:39.819019 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 
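For reference, the earlier "Check RabbitMQ service" probe (which timed out against 192.168.16.9:15672 and was deliberately ignored) looks like a wait-for-style check for the management UI banner; as the preceding info message states, that failure is expected on a first deployment. A rough, hypothetical shell equivalent of that probe, useful when re-checking the endpoint by hand (the curl/grep commands are the editor's assumption, only the address, banner string and variable names come from the log):

  # Assumed equivalent of the pre-deploy probe: succeed only if the management UI
  # answers on the internal VIP and serves the "RabbitMQ Management" banner.
  curl -s --max-time 2 http://192.168.16.9:15672/ | grep -q "RabbitMQ Management" \
    && echo "RabbitMQ already running -> kolla_action_rabbitmq=upgrade" \
    || echo "RabbitMQ not yet deployed -> kolla_action_rabbitmq=kolla_action_ng"

This mirrors the branching visible above: the upgrade action is only selected when the banner is found; otherwise the value falls back to kolla_action_ng and the deploy path is taken.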
2025-09-16 00:49:39.819030 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-16 00:49:39.819040 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-16 00:49:39.819051 | orchestrator | 2025-09-16 00:49:39.819062 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-09-16 00:49:39.819080 | orchestrator | Tuesday 16 September 2025 00:47:41 +0000 (0:00:03.422) 0:00:20.165 ***** 2025-09-16 00:49:39.819091 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-16 00:49:39.819102 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-16 00:49:39.819113 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-16 00:49:39.819124 | orchestrator | 2025-09-16 00:49:39.819134 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-09-16 00:49:39.819145 | orchestrator | Tuesday 16 September 2025 00:47:42 +0000 (0:00:01.570) 0:00:21.735 ***** 2025-09-16 00:49:39.819156 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-16 00:49:39.819167 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-16 00:49:39.819177 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-16 00:49:39.819188 | orchestrator | 2025-09-16 00:49:39.819199 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-09-16 00:49:39.819210 | orchestrator | Tuesday 16 September 2025 00:47:45 +0000 (0:00:02.779) 0:00:24.514 ***** 2025-09-16 00:49:39.819221 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-16 00:49:39.819231 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-16 00:49:39.819242 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-16 00:49:39.819259 | orchestrator | 2025-09-16 00:49:39.819270 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-09-16 00:49:39.819281 | orchestrator | Tuesday 16 September 2025 00:47:47 +0000 (0:00:01.595) 0:00:26.109 ***** 2025-09-16 00:49:39.819291 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-16 00:49:39.819307 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-16 00:49:39.819318 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-16 00:49:39.819328 | orchestrator | 2025-09-16 00:49:39.819339 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-16 00:49:39.819350 | orchestrator | Tuesday 16 September 2025 00:47:49 +0000 (0:00:01.960) 0:00:28.070 ***** 2025-09-16 00:49:39.819360 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:49:39.819371 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:49:39.819382 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:49:39.819392 | orchestrator | 2025-09-16 
00:49:39.819403 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-09-16 00:49:39.819414 | orchestrator | Tuesday 16 September 2025 00:47:50 +0000 (0:00:00.772) 0:00:28.843 ***** 2025-09-16 00:49:39.819425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-16 00:49:39.819444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-16 00:49:39.819457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-16 00:49:39.819475 | orchestrator | 2025-09-16 
00:49:39.819486 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-09-16 00:49:39.819496 | orchestrator | Tuesday 16 September 2025 00:47:51 +0000 (0:00:01.669) 0:00:30.512 ***** 2025-09-16 00:49:39.819507 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:49:39.819518 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:49:39.819529 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:49:39.819540 | orchestrator | 2025-09-16 00:49:39.819555 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-09-16 00:49:39.819566 | orchestrator | Tuesday 16 September 2025 00:47:52 +0000 (0:00:01.032) 0:00:31.545 ***** 2025-09-16 00:49:39.819577 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:49:39.819588 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:49:39.819599 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:49:39.819610 | orchestrator | 2025-09-16 00:49:39.819621 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-09-16 00:49:39.819631 | orchestrator | Tuesday 16 September 2025 00:48:00 +0000 (0:00:07.577) 0:00:39.123 ***** 2025-09-16 00:49:39.819642 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:49:39.819653 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:49:39.819664 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:49:39.819675 | orchestrator | 2025-09-16 00:49:39.819685 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-16 00:49:39.819696 | orchestrator | 2025-09-16 00:49:39.819805 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-16 00:49:39.819817 | orchestrator | Tuesday 16 September 2025 00:48:01 +0000 (0:00:00.775) 0:00:39.899 ***** 2025-09-16 00:49:39.819828 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:49:39.819838 | orchestrator | 2025-09-16 00:49:39.819849 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-16 00:49:39.819860 | orchestrator | Tuesday 16 September 2025 00:48:01 +0000 (0:00:00.712) 0:00:40.612 ***** 2025-09-16 00:49:39.819870 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:49:39.819881 | orchestrator | 2025-09-16 00:49:39.819892 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-16 00:49:39.819902 | orchestrator | Tuesday 16 September 2025 00:48:02 +0000 (0:00:00.220) 0:00:40.832 ***** 2025-09-16 00:49:39.819913 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:49:39.819923 | orchestrator | 2025-09-16 00:49:39.819934 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-16 00:49:39.819945 | orchestrator | Tuesday 16 September 2025 00:48:08 +0000 (0:00:06.836) 0:00:47.669 ***** 2025-09-16 00:49:39.819955 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:49:39.819966 | orchestrator | 2025-09-16 00:49:39.819976 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-16 00:49:39.819987 | orchestrator | 2025-09-16 00:49:39.819997 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-16 00:49:39.820006 | orchestrator | Tuesday 16 September 2025 00:48:59 +0000 (0:00:50.951) 0:01:38.620 ***** 2025-09-16 00:49:39.820016 | orchestrator | ok: [testbed-node-1] 2025-09-16 
00:49:39.820025 | orchestrator | 2025-09-16 00:49:39.820035 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-16 00:49:39.820044 | orchestrator | Tuesday 16 September 2025 00:49:00 +0000 (0:00:00.649) 0:01:39.270 ***** 2025-09-16 00:49:39.820054 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:49:39.820070 | orchestrator | 2025-09-16 00:49:39.820080 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-16 00:49:39.820089 | orchestrator | Tuesday 16 September 2025 00:49:00 +0000 (0:00:00.280) 0:01:39.550 ***** 2025-09-16 00:49:39.820099 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:49:39.820108 | orchestrator | 2025-09-16 00:49:39.820118 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-16 00:49:39.820127 | orchestrator | Tuesday 16 September 2025 00:49:07 +0000 (0:00:06.576) 0:01:46.126 ***** 2025-09-16 00:49:39.820136 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:49:39.820146 | orchestrator | 2025-09-16 00:49:39.820155 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-16 00:49:39.820165 | orchestrator | 2025-09-16 00:49:39.820174 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-16 00:49:39.820184 | orchestrator | Tuesday 16 September 2025 00:49:18 +0000 (0:00:11.664) 0:01:57.791 ***** 2025-09-16 00:49:39.820193 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:49:39.820202 | orchestrator | 2025-09-16 00:49:39.820218 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-16 00:49:39.820227 | orchestrator | Tuesday 16 September 2025 00:49:19 +0000 (0:00:00.587) 0:01:58.378 ***** 2025-09-16 00:49:39.820237 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:49:39.820247 | orchestrator | 2025-09-16 00:49:39.820256 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-16 00:49:39.820266 | orchestrator | Tuesday 16 September 2025 00:49:19 +0000 (0:00:00.278) 0:01:58.656 ***** 2025-09-16 00:49:39.820276 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:49:39.820285 | orchestrator | 2025-09-16 00:49:39.820295 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-16 00:49:39.820305 | orchestrator | Tuesday 16 September 2025 00:49:21 +0000 (0:00:01.522) 0:02:00.179 ***** 2025-09-16 00:49:39.820314 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:49:39.820324 | orchestrator | 2025-09-16 00:49:39.820333 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-09-16 00:49:39.820343 | orchestrator | 2025-09-16 00:49:39.820352 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-09-16 00:49:39.820362 | orchestrator | Tuesday 16 September 2025 00:49:36 +0000 (0:00:14.772) 0:02:14.951 ***** 2025-09-16 00:49:39.820372 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:49:39.820381 | orchestrator | 2025-09-16 00:49:39.820391 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-09-16 00:49:39.820400 | orchestrator | Tuesday 16 September 2025 00:49:36 +0000 (0:00:00.616) 0:02:15.567 ***** 2025-09-16 00:49:39.820410 | orchestrator | 
[WARNING]: Could not match supplied host pattern, ignoring: 2025-09-16 00:49:39.820420 | orchestrator | enable_outward_rabbitmq_True 2025-09-16 00:49:39.820429 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-16 00:49:39.820439 | orchestrator | outward_rabbitmq_restart 2025-09-16 00:49:39.820449 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:49:39.820458 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:49:39.820468 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:49:39.820477 | orchestrator | 2025-09-16 00:49:39.820487 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-09-16 00:49:39.820497 | orchestrator | skipping: no hosts matched 2025-09-16 00:49:39.820506 | orchestrator | 2025-09-16 00:49:39.820521 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-09-16 00:49:39.820531 | orchestrator | skipping: no hosts matched 2025-09-16 00:49:39.820540 | orchestrator | 2025-09-16 00:49:39.820550 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-09-16 00:49:39.820560 | orchestrator | skipping: no hosts matched 2025-09-16 00:49:39.820569 | orchestrator | 2025-09-16 00:49:39.820579 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:49:39.820589 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-16 00:49:39.820605 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-16 00:49:39.820615 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:49:39.820625 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:49:39.820634 | orchestrator | 2025-09-16 00:49:39.820644 | orchestrator | 2025-09-16 00:49:39.820653 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:49:39.820663 | orchestrator | Tuesday 16 September 2025 00:49:39 +0000 (0:00:02.498) 0:02:18.065 ***** 2025-09-16 00:49:39.820672 | orchestrator | =============================================================================== 2025-09-16 00:49:39.820681 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 77.39s 2025-09-16 00:49:39.820691 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 14.94s 2025-09-16 00:49:39.820700 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.58s 2025-09-16 00:49:39.820724 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.42s 2025-09-16 00:49:39.820734 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.42s 2025-09-16 00:49:39.820743 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.78s 2025-09-16 00:49:39.820753 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.50s 2025-09-16 00:49:39.820762 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.97s 2025-09-16 00:49:39.820772 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.96s 2025-09-16 00:49:39.820782 | orchestrator | rabbitmq : Get info 
on RabbitMQ container ------------------------------- 1.95s 2025-09-16 00:49:39.820791 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.71s 2025-09-16 00:49:39.820801 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.67s 2025-09-16 00:49:39.820810 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.60s 2025-09-16 00:49:39.820820 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.57s 2025-09-16 00:49:39.820829 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.30s 2025-09-16 00:49:39.820839 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.08s 2025-09-16 00:49:39.820849 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.03s 2025-09-16 00:49:39.820863 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.87s 2025-09-16 00:49:39.820873 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.80s 2025-09-16 00:49:39.820882 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.79s 2025-09-16 00:49:39.820892 | orchestrator | 2025-09-16 00:49:39 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:49:39.820902 | orchestrator | 2025-09-16 00:49:39 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:49:42.859537 | orchestrator | 2025-09-16 00:49:42 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:49:42.859643 | orchestrator | 2025-09-16 00:49:42 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:49:42.860037 | orchestrator | 2025-09-16 00:49:42 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:49:42.860352 | orchestrator | 2025-09-16 00:49:42 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:49:45.939803 | orchestrator | 2025-09-16 00:49:45 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:49:45.940745 | orchestrator | 2025-09-16 00:49:45 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:49:45.944050 | orchestrator | 2025-09-16 00:49:45 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:49:45.944096 | orchestrator | 2025-09-16 00:49:45 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:49:48.978619 | orchestrator | 2025-09-16 00:49:48 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:49:48.979032 | orchestrator | 2025-09-16 00:49:48 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:49:48.979830 | orchestrator | 2025-09-16 00:49:48 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:49:48.980529 | orchestrator | 2025-09-16 00:49:48 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:49:52.019395 | orchestrator | 2025-09-16 00:49:52 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:49:52.019917 | orchestrator | 2025-09-16 00:49:52 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:49:52.020683 | orchestrator | 2025-09-16 00:49:52 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:49:52.020846 | 
orchestrator | 2025-09-16 00:49:52 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:49:55.058341 | orchestrator | 2025-09-16 00:49:55 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:49:55.059895 | orchestrator | 2025-09-16 00:49:55 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:49:55.060671 | orchestrator | 2025-09-16 00:49:55 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:49:55.060829 | orchestrator | 2025-09-16 00:49:55 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:49:58.098205 | orchestrator | 2025-09-16 00:49:58 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:49:58.098307 | orchestrator | 2025-09-16 00:49:58 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:49:58.099217 | orchestrator | 2025-09-16 00:49:58 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:49:58.099242 | orchestrator | 2025-09-16 00:49:58 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:50:01.140844 | orchestrator | 2025-09-16 00:50:01 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:50:01.143054 | orchestrator | 2025-09-16 00:50:01 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:50:01.145266 | orchestrator | 2025-09-16 00:50:01 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:50:01.145289 | orchestrator | 2025-09-16 00:50:01 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:50:04.189154 | orchestrator | 2025-09-16 00:50:04 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:50:04.189295 | orchestrator | 2025-09-16 00:50:04 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:50:04.190086 | orchestrator | 2025-09-16 00:50:04 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:50:04.190196 | orchestrator | 2025-09-16 00:50:04 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:50:07.245026 | orchestrator | 2025-09-16 00:50:07 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:50:07.246250 | orchestrator | 2025-09-16 00:50:07 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:50:07.249138 | orchestrator | 2025-09-16 00:50:07 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:50:07.249160 | orchestrator | 2025-09-16 00:50:07 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:50:10.297515 | orchestrator | 2025-09-16 00:50:10 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:50:10.298666 | orchestrator | 2025-09-16 00:50:10 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:50:10.301067 | orchestrator | 2025-09-16 00:50:10 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:50:10.301317 | orchestrator | 2025-09-16 00:50:10 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:50:13.349629 | orchestrator | 2025-09-16 00:50:13 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:50:13.351016 | orchestrator | 2025-09-16 00:50:13 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:50:13.352896 | orchestrator | 2025-09-16 00:50:13 | INFO  | Task 
848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:50:13.352920 | orchestrator | 2025-09-16 00:50:13 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:50:16.397304 | orchestrator | 2025-09-16 00:50:16 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:50:16.399552 | orchestrator | 2025-09-16 00:50:16 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:50:16.401966 | orchestrator | 2025-09-16 00:50:16 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:50:16.402005 | orchestrator | 2025-09-16 00:50:16 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:50:19.461674 | orchestrator | 2025-09-16 00:50:19 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:50:19.463041 | orchestrator | 2025-09-16 00:50:19 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:50:19.464679 | orchestrator | 2025-09-16 00:50:19 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:50:19.465030 | orchestrator | 2025-09-16 00:50:19 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:50:22.518252 | orchestrator | 2025-09-16 00:50:22 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:50:22.519592 | orchestrator | 2025-09-16 00:50:22 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state STARTED 2025-09-16 00:50:22.521143 | orchestrator | 2025-09-16 00:50:22 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:50:22.521369 | orchestrator | 2025-09-16 00:50:22 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:50:25.556033 | orchestrator | 2025-09-16 00:50:25 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:50:25.559815 | orchestrator | 2025-09-16 00:50:25 | INFO  | Task db2225f2-9c48-443a-8a5d-5f4fc621a498 is in state SUCCESS 2025-09-16 00:50:25.561912 | orchestrator | 2025-09-16 00:50:25.561948 | orchestrator | 2025-09-16 00:50:25.561960 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-16 00:50:25.561973 | orchestrator | 2025-09-16 00:50:25.561985 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-16 00:50:25.561997 | orchestrator | Tuesday 16 September 2025 00:48:12 +0000 (0:00:00.143) 0:00:00.143 ***** 2025-09-16 00:50:25.562080 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:50:25.562097 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:50:25.562108 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:50:25.562331 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:50:25.562347 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:50:25.562358 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:50:25.562369 | orchestrator | 2025-09-16 00:50:25.562380 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-16 00:50:25.562391 | orchestrator | Tuesday 16 September 2025 00:48:13 +0000 (0:00:00.531) 0:00:00.675 ***** 2025-09-16 00:50:25.562402 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-09-16 00:50:25.562414 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-09-16 00:50:25.562424 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-09-16 00:50:25.562435 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-09-16 
00:50:25.562446 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-09-16 00:50:25.562457 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-09-16 00:50:25.562468 | orchestrator | 2025-09-16 00:50:25.562479 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-09-16 00:50:25.562490 | orchestrator | 2025-09-16 00:50:25.562501 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-09-16 00:50:25.562512 | orchestrator | Tuesday 16 September 2025 00:48:14 +0000 (0:00:00.836) 0:00:01.512 ***** 2025-09-16 00:50:25.562527 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:50:25.562541 | orchestrator | 2025-09-16 00:50:25.562554 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-09-16 00:50:25.562566 | orchestrator | Tuesday 16 September 2025 00:48:15 +0000 (0:00:01.088) 0:00:02.601 ***** 2025-09-16 00:50:25.562581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.562597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.562624 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.562639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.562651 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.562675 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.562688 | orchestrator | 2025-09-16 00:50:25.562742 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-09-16 00:50:25.562756 | orchestrator | Tuesday 16 September 2025 00:48:16 +0000 (0:00:01.061) 0:00:03.662 ***** 2025-09-16 00:50:25.562770 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.563081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.563104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.563115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.563127 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.563145 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.563157 | orchestrator | 2025-09-16 00:50:25.563168 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-09-16 
00:50:25.563179 | orchestrator | Tuesday 16 September 2025 00:48:18 +0000 (0:00:02.090) 0:00:05.752 ***** 2025-09-16 00:50:25.563190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.563211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.563231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.563243 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.563254 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.563265 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.563276 | orchestrator | 2025-09-16 00:50:25.563287 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-09-16 00:50:25.563298 | orchestrator | Tuesday 16 September 2025 00:48:19 +0000 (0:00:01.313) 0:00:07.066 ***** 2025-09-16 00:50:25.563309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.563321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.563337 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.563355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.563366 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.563378 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.563389 | orchestrator | 2025-09-16 00:50:25.563406 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-09-16 00:50:25.563417 | orchestrator | Tuesday 16 September 2025 00:48:21 +0000 (0:00:02.194) 0:00:09.261 ***** 2025-09-16 00:50:25.563429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.563460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.563473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.563485 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.563497 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.563513 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.563532 | orchestrator | 2025-09-16 00:50:25.563544 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-09-16 00:50:25.563556 | orchestrator | Tuesday 16 September 2025 00:48:23 +0000 (0:00:01.536) 0:00:10.797 ***** 2025-09-16 00:50:25.563567 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:50:25.563579 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:50:25.563591 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:50:25.563602 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:50:25.563613 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:50:25.563624 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:50:25.563635 | orchestrator | 2025-09-16 00:50:25.563647 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-09-16 00:50:25.563658 | orchestrator | Tuesday 16 September 2025 00:48:25 +0000 (0:00:02.700) 0:00:13.498 ***** 2025-09-16 00:50:25.563669 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-09-16 00:50:25.563681 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-09-16 00:50:25.563693 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-09-16 00:50:25.563747 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-09-16 00:50:25.563762 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-09-16 
00:50:25.563774 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-09-16 00:50:25.563786 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-16 00:50:25.563798 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-16 00:50:25.563818 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-16 00:50:25.563830 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-16 00:50:25.563843 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-16 00:50:25.563855 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-16 00:50:25.563868 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-16 00:50:25.563882 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-16 00:50:25.563895 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-16 00:50:25.563907 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-16 00:50:25.563919 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-16 00:50:25.563932 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-16 00:50:25.563944 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-16 00:50:25.563958 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-16 00:50:25.563970 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-16 00:50:25.563990 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-16 00:50:25.564003 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-16 00:50:25.564014 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-16 00:50:25.564028 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-16 00:50:25.564040 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-16 00:50:25.564051 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-16 00:50:25.564062 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-16 00:50:25.564072 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-16 00:50:25.564083 | orchestrator | changed: [testbed-node-5] => 
(item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-16 00:50:25.564094 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-16 00:50:25.564105 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-16 00:50:25.564121 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-16 00:50:25.564132 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-16 00:50:25.564143 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-16 00:50:25.564154 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-16 00:50:25.564165 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-16 00:50:25.564176 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-16 00:50:25.564187 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-16 00:50:25.564198 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-16 00:50:25.564208 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-16 00:50:25.564219 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-16 00:50:25.564230 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-09-16 00:50:25.564242 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-09-16 00:50:25.564258 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-09-16 00:50:25.564269 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-09-16 00:50:25.564280 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-09-16 00:50:25.564291 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-09-16 00:50:25.564302 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-16 00:50:25.564313 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-16 00:50:25.564331 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-16 00:50:25.564342 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-16 00:50:25.564353 | orchestrator | ok: [testbed-node-5] => (item={'name': 
'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-16 00:50:25.564363 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-16 00:50:25.564374 | orchestrator | 2025-09-16 00:50:25.564385 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-16 00:50:25.564396 | orchestrator | Tuesday 16 September 2025 00:48:44 +0000 (0:00:18.353) 0:00:31.852 ***** 2025-09-16 00:50:25.564407 | orchestrator | 2025-09-16 00:50:25.564417 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-16 00:50:25.564428 | orchestrator | Tuesday 16 September 2025 00:48:44 +0000 (0:00:00.218) 0:00:32.071 ***** 2025-09-16 00:50:25.564439 | orchestrator | 2025-09-16 00:50:25.564449 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-16 00:50:25.564460 | orchestrator | Tuesday 16 September 2025 00:48:44 +0000 (0:00:00.064) 0:00:32.135 ***** 2025-09-16 00:50:25.564471 | orchestrator | 2025-09-16 00:50:25.564482 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-16 00:50:25.564492 | orchestrator | Tuesday 16 September 2025 00:48:44 +0000 (0:00:00.084) 0:00:32.219 ***** 2025-09-16 00:50:25.564503 | orchestrator | 2025-09-16 00:50:25.564514 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-16 00:50:25.564525 | orchestrator | Tuesday 16 September 2025 00:48:44 +0000 (0:00:00.077) 0:00:32.296 ***** 2025-09-16 00:50:25.564536 | orchestrator | 2025-09-16 00:50:25.564546 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-16 00:50:25.564557 | orchestrator | Tuesday 16 September 2025 00:48:44 +0000 (0:00:00.064) 0:00:32.361 ***** 2025-09-16 00:50:25.564568 | orchestrator | 2025-09-16 00:50:25.564579 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-09-16 00:50:25.564589 | orchestrator | Tuesday 16 September 2025 00:48:44 +0000 (0:00:00.065) 0:00:32.426 ***** 2025-09-16 00:50:25.564600 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:50:25.564611 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:50:25.564622 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:50:25.564633 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:50:25.564644 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:50:25.564655 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:50:25.564665 | orchestrator | 2025-09-16 00:50:25.564676 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-09-16 00:50:25.564692 | orchestrator | Tuesday 16 September 2025 00:48:46 +0000 (0:00:01.935) 0:00:34.362 ***** 2025-09-16 00:50:25.564703 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:50:25.564768 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:50:25.564780 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:50:25.564791 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:50:25.564802 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:50:25.564813 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:50:25.564823 | orchestrator | 2025-09-16 00:50:25.564835 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-09-16 00:50:25.564845 | orchestrator | 2025-09-16 00:50:25.564856 | orchestrator | TASK [ovn-db 
: include_tasks] ************************************************** 2025-09-16 00:50:25.564867 | orchestrator | Tuesday 16 September 2025 00:49:14 +0000 (0:00:28.042) 0:01:02.404 ***** 2025-09-16 00:50:25.564878 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:50:25.564888 | orchestrator | 2025-09-16 00:50:25.564899 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-16 00:50:25.564917 | orchestrator | Tuesday 16 September 2025 00:49:15 +0000 (0:00:00.931) 0:01:03.336 ***** 2025-09-16 00:50:25.564928 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:50:25.564939 | orchestrator | 2025-09-16 00:50:25.564950 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-09-16 00:50:25.564961 | orchestrator | Tuesday 16 September 2025 00:49:16 +0000 (0:00:00.743) 0:01:04.079 ***** 2025-09-16 00:50:25.564971 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:50:25.564982 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:50:25.564993 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:50:25.565003 | orchestrator | 2025-09-16 00:50:25.565014 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-09-16 00:50:25.565025 | orchestrator | Tuesday 16 September 2025 00:49:17 +0000 (0:00:00.935) 0:01:05.015 ***** 2025-09-16 00:50:25.565036 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:50:25.565046 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:50:25.565057 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:50:25.565074 | orchestrator | 2025-09-16 00:50:25.565085 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-09-16 00:50:25.565096 | orchestrator | Tuesday 16 September 2025 00:49:17 +0000 (0:00:00.339) 0:01:05.354 ***** 2025-09-16 00:50:25.565106 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:50:25.565117 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:50:25.565128 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:50:25.565139 | orchestrator | 2025-09-16 00:50:25.565149 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-09-16 00:50:25.565158 | orchestrator | Tuesday 16 September 2025 00:49:18 +0000 (0:00:00.305) 0:01:05.660 ***** 2025-09-16 00:50:25.565168 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:50:25.565177 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:50:25.565187 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:50:25.565196 | orchestrator | 2025-09-16 00:50:25.565206 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-09-16 00:50:25.565215 | orchestrator | Tuesday 16 September 2025 00:49:18 +0000 (0:00:00.342) 0:01:06.002 ***** 2025-09-16 00:50:25.565225 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:50:25.565235 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:50:25.565244 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:50:25.565254 | orchestrator | 2025-09-16 00:50:25.565263 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-09-16 00:50:25.565273 | orchestrator | Tuesday 16 September 2025 00:49:19 +0000 (0:00:00.552) 0:01:06.555 ***** 2025-09-16 00:50:25.565282 | orchestrator | skipping: [testbed-node-0] 2025-09-16 
00:50:25.565292 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:50:25.565301 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:50:25.565311 | orchestrator | 2025-09-16 00:50:25.565320 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-09-16 00:50:25.565330 | orchestrator | Tuesday 16 September 2025 00:49:19 +0000 (0:00:00.269) 0:01:06.825 ***** 2025-09-16 00:50:25.565339 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:50:25.565349 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:50:25.565358 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:50:25.565368 | orchestrator | 2025-09-16 00:50:25.565377 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-09-16 00:50:25.565387 | orchestrator | Tuesday 16 September 2025 00:49:19 +0000 (0:00:00.301) 0:01:07.126 ***** 2025-09-16 00:50:25.565396 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:50:25.565406 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:50:25.565415 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:50:25.565425 | orchestrator | 2025-09-16 00:50:25.565434 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-09-16 00:50:25.565444 | orchestrator | Tuesday 16 September 2025 00:49:19 +0000 (0:00:00.320) 0:01:07.447 ***** 2025-09-16 00:50:25.565453 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:50:25.565475 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:50:25.565484 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:50:25.565494 | orchestrator | 2025-09-16 00:50:25.565503 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-09-16 00:50:25.565513 | orchestrator | Tuesday 16 September 2025 00:49:20 +0000 (0:00:00.568) 0:01:08.015 ***** 2025-09-16 00:50:25.565522 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:50:25.565532 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:50:25.565541 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:50:25.565551 | orchestrator | 2025-09-16 00:50:25.565560 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-09-16 00:50:25.565570 | orchestrator | Tuesday 16 September 2025 00:49:20 +0000 (0:00:00.292) 0:01:08.308 ***** 2025-09-16 00:50:25.565579 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:50:25.565589 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:50:25.565598 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:50:25.565608 | orchestrator | 2025-09-16 00:50:25.565618 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-09-16 00:50:25.565627 | orchestrator | Tuesday 16 September 2025 00:49:21 +0000 (0:00:00.270) 0:01:08.578 ***** 2025-09-16 00:50:25.565637 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:50:25.565646 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:50:25.565660 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:50:25.565670 | orchestrator | 2025-09-16 00:50:25.565680 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-09-16 00:50:25.565690 | orchestrator | Tuesday 16 September 2025 00:49:21 +0000 (0:00:00.301) 0:01:08.880 ***** 2025-09-16 00:50:25.565699 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:50:25.565725 | orchestrator | skipping: [testbed-node-1] 
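On this fresh deployment the lookup checks above are all skipped because no OVN DB container volumes exist yet. Once the cluster has been bootstrapped, the RAFT state these tasks probe can also be inspected by hand; a minimal sketch, assuming the kolla container names shown in this log and the usual control-socket paths inside the images (the socket paths are an assumption and may differ):

    # Query the RAFT cluster status of the northbound and southbound DBs
    docker exec ovn_nb_db ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
    docker exec ovn_sb_db ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound

Each command reports the server's role (leader or follower), its current term, and the other cluster members, which is essentially the information the later "Get OVN_Northbound/OVN_Southbound cluster leader" tasks act on.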
2025-09-16 00:50:25.565735 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:50:25.565745 | orchestrator | 2025-09-16 00:50:25.565754 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-09-16 00:50:25.565764 | orchestrator | Tuesday 16 September 2025 00:49:21 +0000 (0:00:00.319) 0:01:09.200 ***** 2025-09-16 00:50:25.565773 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:50:25.565783 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:50:25.565792 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:50:25.565802 | orchestrator | 2025-09-16 00:50:25.565811 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-09-16 00:50:25.565821 | orchestrator | Tuesday 16 September 2025 00:49:22 +0000 (0:00:00.702) 0:01:09.902 ***** 2025-09-16 00:50:25.565830 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:50:25.565840 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:50:25.565849 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:50:25.565858 | orchestrator | 2025-09-16 00:50:25.565868 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-09-16 00:50:25.565877 | orchestrator | Tuesday 16 September 2025 00:49:22 +0000 (0:00:00.372) 0:01:10.274 ***** 2025-09-16 00:50:25.565887 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:50:25.565896 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:50:25.565905 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:50:25.565915 | orchestrator | 2025-09-16 00:50:25.565924 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-09-16 00:50:25.565934 | orchestrator | Tuesday 16 September 2025 00:49:23 +0000 (0:00:00.576) 0:01:10.851 ***** 2025-09-16 00:50:25.565943 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:50:25.565953 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:50:25.565968 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:50:25.565978 | orchestrator | 2025-09-16 00:50:25.565988 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-16 00:50:25.565997 | orchestrator | Tuesday 16 September 2025 00:49:23 +0000 (0:00:00.409) 0:01:11.260 ***** 2025-09-16 00:50:25.566007 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:50:25.566050 | orchestrator | 2025-09-16 00:50:25.566062 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-09-16 00:50:25.566071 | orchestrator | Tuesday 16 September 2025 00:49:24 +0000 (0:00:00.914) 0:01:12.175 ***** 2025-09-16 00:50:25.566081 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:50:25.566091 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:50:25.566100 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:50:25.566110 | orchestrator | 2025-09-16 00:50:25.566119 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-09-16 00:50:25.566129 | orchestrator | Tuesday 16 September 2025 00:49:25 +0000 (0:00:00.436) 0:01:12.611 ***** 2025-09-16 00:50:25.566138 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:50:25.566148 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:50:25.566157 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:50:25.566166 | orchestrator | 2025-09-16 00:50:25.566176 | 
orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-09-16 00:50:25.566186 | orchestrator | Tuesday 16 September 2025 00:49:25 +0000 (0:00:00.450) 0:01:13.062 ***** 2025-09-16 00:50:25.566195 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:50:25.566205 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:50:25.566214 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:50:25.566224 | orchestrator | 2025-09-16 00:50:25.566233 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-09-16 00:50:25.566243 | orchestrator | Tuesday 16 September 2025 00:49:26 +0000 (0:00:00.659) 0:01:13.721 ***** 2025-09-16 00:50:25.566252 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:50:25.566262 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:50:25.566272 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:50:25.566281 | orchestrator | 2025-09-16 00:50:25.566291 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-09-16 00:50:25.566300 | orchestrator | Tuesday 16 September 2025 00:49:26 +0000 (0:00:00.307) 0:01:14.029 ***** 2025-09-16 00:50:25.566310 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:50:25.566319 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:50:25.566329 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:50:25.566338 | orchestrator | 2025-09-16 00:50:25.566348 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-09-16 00:50:25.566357 | orchestrator | Tuesday 16 September 2025 00:49:26 +0000 (0:00:00.393) 0:01:14.422 ***** 2025-09-16 00:50:25.566367 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:50:25.566376 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:50:25.566386 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:50:25.566396 | orchestrator | 2025-09-16 00:50:25.566406 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-09-16 00:50:25.566415 | orchestrator | Tuesday 16 September 2025 00:49:27 +0000 (0:00:00.428) 0:01:14.851 ***** 2025-09-16 00:50:25.566425 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:50:25.566435 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:50:25.566444 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:50:25.566454 | orchestrator | 2025-09-16 00:50:25.566463 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-09-16 00:50:25.566473 | orchestrator | Tuesday 16 September 2025 00:49:27 +0000 (0:00:00.481) 0:01:15.333 ***** 2025-09-16 00:50:25.566482 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:50:25.566492 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:50:25.566501 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:50:25.566511 | orchestrator | 2025-09-16 00:50:25.566520 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-16 00:50:25.566530 | orchestrator | Tuesday 16 September 2025 00:49:28 +0000 (0:00:00.302) 0:01:15.635 ***** 2025-09-16 00:50:25.566545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566651 | 
orchestrator | 2025-09-16 00:50:25.566661 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-16 00:50:25.566671 | orchestrator | Tuesday 16 September 2025 00:49:29 +0000 (0:00:01.377) 0:01:17.013 ***** 2025-09-16 00:50:25.566681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566875 | orchestrator | 2025-09-16 00:50:25.566884 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-16 00:50:25.566894 | orchestrator | Tuesday 16 September 2025 00:49:33 +0000 (0:00:04.140) 0:01:21.153 ***** 2025-09-16 00:50:25.566904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.566997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.567007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.567016 | orchestrator | 2025-09-16 00:50:25.567026 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-16 00:50:25.567036 | orchestrator | Tuesday 16 September 2025 00:49:35 +0000 (0:00:02.105) 0:01:23.259 ***** 2025-09-16 00:50:25.567045 | orchestrator | 2025-09-16 00:50:25.567055 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-16 00:50:25.567065 | orchestrator | Tuesday 16 September 2025 00:49:36 +0000 (0:00:00.266) 0:01:23.525 ***** 2025-09-16 00:50:25.567074 | orchestrator | 2025-09-16 00:50:25.567084 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-16 00:50:25.567099 | orchestrator | Tuesday 16 September 2025 00:49:36 +0000 (0:00:00.072) 0:01:23.597 ***** 2025-09-16 00:50:25.567109 | orchestrator | 2025-09-16 00:50:25.567118 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-16 00:50:25.567128 | orchestrator | Tuesday 16 September 2025 00:49:36 +0000 (0:00:00.066) 0:01:23.664 ***** 2025-09-16 00:50:25.567137 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:50:25.567147 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:50:25.567156 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:50:25.567166 | orchestrator | 2025-09-16 00:50:25.567175 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-16 00:50:25.567185 | orchestrator | Tuesday 16 September 2025 00:49:38 +0000 (0:00:02.541) 0:01:26.206 ***** 2025-09-16 00:50:25.567194 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:50:25.567204 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:50:25.567213 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:50:25.567223 | orchestrator | 2025-09-16 00:50:25.567232 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-16 00:50:25.567242 | orchestrator | Tuesday 16 September 2025 00:49:45 +0000 (0:00:06.586) 0:01:32.793 ***** 2025-09-16 00:50:25.567251 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:50:25.567261 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:50:25.567270 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:50:25.567280 | orchestrator | 2025-09-16 00:50:25.567289 | 
orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-16 00:50:25.567303 | orchestrator | Tuesday 16 September 2025 00:49:47 +0000 (0:00:02.567) 0:01:35.360 ***** 2025-09-16 00:50:25.567313 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:50:25.567322 | orchestrator | 2025-09-16 00:50:25.567332 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-16 00:50:25.567341 | orchestrator | Tuesday 16 September 2025 00:49:48 +0000 (0:00:00.331) 0:01:35.691 ***** 2025-09-16 00:50:25.567351 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:50:25.567360 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:50:25.567370 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:50:25.567380 | orchestrator | 2025-09-16 00:50:25.567389 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-16 00:50:25.567398 | orchestrator | Tuesday 16 September 2025 00:49:48 +0000 (0:00:00.721) 0:01:36.413 ***** 2025-09-16 00:50:25.567408 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:50:25.567418 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:50:25.567427 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:50:25.567437 | orchestrator | 2025-09-16 00:50:25.567446 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-16 00:50:25.567456 | orchestrator | Tuesday 16 September 2025 00:49:49 +0000 (0:00:00.592) 0:01:37.006 ***** 2025-09-16 00:50:25.567465 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:50:25.567475 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:50:25.567484 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:50:25.567494 | orchestrator | 2025-09-16 00:50:25.567503 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-16 00:50:25.567513 | orchestrator | Tuesday 16 September 2025 00:49:50 +0000 (0:00:00.732) 0:01:37.738 ***** 2025-09-16 00:50:25.567522 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:50:25.567532 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:50:25.567541 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:50:25.567551 | orchestrator | 2025-09-16 00:50:25.567560 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-16 00:50:25.567570 | orchestrator | Tuesday 16 September 2025 00:49:50 +0000 (0:00:00.551) 0:01:38.289 ***** 2025-09-16 00:50:25.567579 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:50:25.567589 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:50:25.567603 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:50:25.567613 | orchestrator | 2025-09-16 00:50:25.567622 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-16 00:50:25.567638 | orchestrator | Tuesday 16 September 2025 00:49:51 +0000 (0:00:00.945) 0:01:39.235 ***** 2025-09-16 00:50:25.567648 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:50:25.567658 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:50:25.567667 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:50:25.567677 | orchestrator | 2025-09-16 00:50:25.567686 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-09-16 00:50:25.567696 | orchestrator | Tuesday 16 September 2025 00:49:52 +0000 (0:00:00.688) 0:01:39.923 ***** 2025-09-16 00:50:25.567705 | orchestrator | ok: [testbed-node-0] 
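The two connection-settings tasks above report changed only on testbed-node-0, the node currently holding the NB/SB leader role; the Connection rows live in the replicated database, so configuring them once against the leader is enough. A rough shell equivalent for checking the result, assuming the OVN client utilities are available somewhere and the conventional ports (6642 matches the ovn-remote string earlier in this log, 6641 for the northbound DB is an assumption):

    # Point the clients at the leader and print the configured listeners
    ovn-nbctl --db=tcp:192.168.16.10:6641 get-connection
    ovn-sbctl --db=tcp:192.168.16.10:6642 get-connection

get-connection prints the configured connection targets; set-connection with the same client would change them, which is roughly what the tasks above amount to.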
2025-09-16 00:50:25.567731 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:50:25.567741 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:50:25.567750 | orchestrator | 2025-09-16 00:50:25.567759 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-16 00:50:25.567769 | orchestrator | Tuesday 16 September 2025 00:49:52 +0000 (0:00:00.279) 0:01:40.202 ***** 2025-09-16 00:50:25.567779 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.567789 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.567799 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.567809 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.567819 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.567834 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.567844 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.567854 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.567882 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.567893 | orchestrator | 2025-09-16 00:50:25.567903 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-16 00:50:25.567913 | orchestrator | Tuesday 16 September 2025 00:49:54 +0000 (0:00:01.545) 0:01:41.748 ***** 2025-09-16 00:50:25.567923 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.567933 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.567943 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.567953 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.567964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.567974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.567988 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 
'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.567999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.568014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.568024 | orchestrator | 2025-09-16 00:50:25.568034 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-16 00:50:25.568044 | orchestrator | Tuesday 16 September 2025 00:49:58 +0000 (0:00:04.230) 0:01:45.979 ***** 2025-09-16 00:50:25.568059 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.568069 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.568079 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.568089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.568099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-09-16 00:50:25.568109 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.568119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.568134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.568150 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 00:50:25.568160 | orchestrator | 2025-09-16 00:50:25.568170 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-16 00:50:25.568179 | orchestrator | Tuesday 16 September 2025 00:50:01 +0000 (0:00:02.702) 0:01:48.682 ***** 2025-09-16 00:50:25.568189 | orchestrator | 2025-09-16 00:50:25.568199 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-16 00:50:25.568209 | orchestrator | Tuesday 16 September 2025 00:50:01 +0000 (0:00:00.067) 0:01:48.749 ***** 2025-09-16 00:50:25.568218 | orchestrator | 2025-09-16 00:50:25.568228 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-16 00:50:25.568237 | orchestrator | Tuesday 16 September 2025 00:50:01 +0000 (0:00:00.062) 0:01:48.811 ***** 2025-09-16 00:50:25.568247 | orchestrator | 2025-09-16 00:50:25.568257 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-16 00:50:25.568266 | orchestrator | Tuesday 16 September 2025 00:50:01 +0000 (0:00:00.065) 0:01:48.877 ***** 2025-09-16 00:50:25.568276 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:50:25.568286 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:50:25.568296 | orchestrator | 2025-09-16 00:50:25.568310 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-16 00:50:25.568320 | orchestrator | Tuesday 16 September 2025 00:50:07 +0000 (0:00:06.058) 0:01:54.935 ***** 2025-09-16 00:50:25.568329 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:50:25.568339 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:50:25.568348 | orchestrator | 2025-09-16 00:50:25.568358 | orchestrator | 
RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-16 00:50:25.568368 | orchestrator | Tuesday 16 September 2025 00:50:13 +0000 (0:00:06.049) 0:02:00.984 ***** 2025-09-16 00:50:25.568378 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:50:25.568388 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:50:25.568397 | orchestrator | 2025-09-16 00:50:25.568407 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-16 00:50:25.568416 | orchestrator | Tuesday 16 September 2025 00:50:20 +0000 (0:00:06.649) 0:02:07.634 ***** 2025-09-16 00:50:25.568426 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:50:25.568435 | orchestrator | 2025-09-16 00:50:25.568445 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-16 00:50:25.568454 | orchestrator | Tuesday 16 September 2025 00:50:20 +0000 (0:00:00.137) 0:02:07.771 ***** 2025-09-16 00:50:25.568464 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:50:25.568474 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:50:25.568483 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:50:25.568493 | orchestrator | 2025-09-16 00:50:25.568502 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-16 00:50:25.568512 | orchestrator | Tuesday 16 September 2025 00:50:21 +0000 (0:00:00.811) 0:02:08.583 ***** 2025-09-16 00:50:25.568522 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:50:25.568531 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:50:25.568541 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:50:25.568551 | orchestrator | 2025-09-16 00:50:25.568560 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-16 00:50:25.568570 | orchestrator | Tuesday 16 September 2025 00:50:21 +0000 (0:00:00.690) 0:02:09.273 ***** 2025-09-16 00:50:25.568579 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:50:25.568589 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:50:25.568599 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:50:25.568608 | orchestrator | 2025-09-16 00:50:25.568624 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-16 00:50:25.568634 | orchestrator | Tuesday 16 September 2025 00:50:22 +0000 (0:00:00.914) 0:02:10.188 ***** 2025-09-16 00:50:25.568643 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:50:25.568653 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:50:25.568663 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:50:25.568672 | orchestrator | 2025-09-16 00:50:25.568682 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-16 00:50:25.568692 | orchestrator | Tuesday 16 September 2025 00:50:23 +0000 (0:00:00.657) 0:02:10.846 ***** 2025-09-16 00:50:25.568701 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:50:25.568727 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:50:25.568736 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:50:25.568746 | orchestrator | 2025-09-16 00:50:25.568756 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-16 00:50:25.568766 | orchestrator | Tuesday 16 September 2025 00:50:24 +0000 (0:00:00.715) 0:02:11.562 ***** 2025-09-16 00:50:25.568775 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:50:25.568785 | orchestrator | ok: [testbed-node-1] 
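The "Wait for ovn-nb-db" and "Wait for ovn-sb-db" tasks simply block until the database listeners accept TCP connections after the restarts. A crude stand-alone equivalent, assuming nc is installed and the conventional port numbers (6642 appears in the ovn-remote string earlier in this log, 6641 is assumed for the northbound DB):

    # Block until the NB (6641) and SB (6642) listeners on the leader accept connections
    for port in 6641 6642; do
        until nc -z 192.168.16.10 "$port"; do sleep 1; done
    done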
2025-09-16 00:50:25.568795 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:50:25.568804 | orchestrator | 2025-09-16 00:50:25.568814 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:50:25.568824 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-16 00:50:25.568835 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-09-16 00:50:25.568849 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-09-16 00:50:25.568860 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:50:25.568870 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:50:25.568879 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:50:25.568889 | orchestrator | 2025-09-16 00:50:25.568899 | orchestrator | 2025-09-16 00:50:25.568909 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:50:25.568918 | orchestrator | Tuesday 16 September 2025 00:50:24 +0000 (0:00:00.891) 0:02:12.453 ***** 2025-09-16 00:50:25.568928 | orchestrator | =============================================================================== 2025-09-16 00:50:25.568938 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 28.04s 2025-09-16 00:50:25.568948 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.35s 2025-09-16 00:50:25.568957 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 12.64s 2025-09-16 00:50:25.568967 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.22s 2025-09-16 00:50:25.568976 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.60s 2025-09-16 00:50:25.568986 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.23s 2025-09-16 00:50:25.568996 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.14s 2025-09-16 00:50:25.569010 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.70s 2025-09-16 00:50:25.569020 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.70s 2025-09-16 00:50:25.569030 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.19s 2025-09-16 00:50:25.569040 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.11s 2025-09-16 00:50:25.569058 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.09s 2025-09-16 00:50:25.569068 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.94s 2025-09-16 00:50:25.569078 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.55s 2025-09-16 00:50:25.569087 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.54s 2025-09-16 00:50:25.569097 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.38s 2025-09-16 00:50:25.569107 | orchestrator | ovn-controller : Ensuring systemd 
override directory exists ------------- 1.31s 2025-09-16 00:50:25.569116 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.09s 2025-09-16 00:50:25.569126 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.06s 2025-09-16 00:50:25.569136 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 0.95s 2025-09-16 00:50:25.569146 | orchestrator | 2025-09-16 00:50:25 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:50:25.569156 | orchestrator | 2025-09-16 00:50:25 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:50:28.619417 | orchestrator | 2025-09-16 00:50:28 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:50:28.621186 | orchestrator | 2025-09-16 00:50:28 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:50:28.621232 | orchestrator | 2025-09-16 00:50:28 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:50:31.664140 | orchestrator | 2025-09-16 00:50:31 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:50:31.667664 | orchestrator | 2025-09-16 00:50:31 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:50:31.670316 | orchestrator | 2025-09-16 00:50:31 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:50:34.721119 | orchestrator | 2025-09-16 00:50:34 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:50:34.722480 | orchestrator | 2025-09-16 00:50:34 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:50:34.722513 | orchestrator | 2025-09-16 00:50:34 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:50:37.764528 | orchestrator | 2025-09-16 00:50:37 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:50:37.765219 | orchestrator | 2025-09-16 00:50:37 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:50:37.765416 | orchestrator | 2025-09-16 00:50:37 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:50:40.812301 | orchestrator | 2025-09-16 00:50:40 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:50:40.813939 | orchestrator | 2025-09-16 00:50:40 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:50:40.814407 | orchestrator | 2025-09-16 00:50:40 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:50:43.846285 | orchestrator | 2025-09-16 00:50:43 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:50:43.848248 | orchestrator | 2025-09-16 00:50:43 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:50:43.848869 | orchestrator | 2025-09-16 00:50:43 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:50:46.885024 | orchestrator | 2025-09-16 00:50:46 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:50:46.885631 | orchestrator | 2025-09-16 00:50:46 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:50:46.885699 | orchestrator | 2025-09-16 00:50:46 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:50:49.918977 | orchestrator | 2025-09-16 00:50:49 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:50:49.921257 | orchestrator | 2025-09-16 00:50:49 
| INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:50:49.921287 | orchestrator | 2025-09-16 00:50:49 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:50:52.953479 | orchestrator | 2025-09-16 00:50:52 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:50:52.953642 | orchestrator | 2025-09-16 00:50:52 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:50:52.953668 | orchestrator | 2025-09-16 00:50:52 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:50:56.002542 | orchestrator | 2025-09-16 00:50:55 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:50:56.004999 | orchestrator | 2025-09-16 00:50:56 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:50:56.005055 | orchestrator | 2025-09-16 00:50:56 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:50:59.050540 | orchestrator | 2025-09-16 00:50:59 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:50:59.051034 | orchestrator | 2025-09-16 00:50:59 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:50:59.051069 | orchestrator | 2025-09-16 00:50:59 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:51:02.094301 | orchestrator | 2025-09-16 00:51:02 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:51:02.095204 | orchestrator | 2025-09-16 00:51:02 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:51:02.095612 | orchestrator | 2025-09-16 00:51:02 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:51:05.142956 | orchestrator | 2025-09-16 00:51:05 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:51:05.144236 | orchestrator | 2025-09-16 00:51:05 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:51:05.144409 | orchestrator | 2025-09-16 00:51:05 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:51:08.185710 | orchestrator | 2025-09-16 00:51:08 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:51:08.186184 | orchestrator | 2025-09-16 00:51:08 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:51:08.186219 | orchestrator | 2025-09-16 00:51:08 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:51:11.228941 | orchestrator | 2025-09-16 00:51:11 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:51:11.230987 | orchestrator | 2025-09-16 00:51:11 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:51:11.231300 | orchestrator | 2025-09-16 00:51:11 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:51:14.280917 | orchestrator | 2025-09-16 00:51:14 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:51:14.283001 | orchestrator | 2025-09-16 00:51:14 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:51:14.283364 | orchestrator | 2025-09-16 00:51:14 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:51:17.326108 | orchestrator | 2025-09-16 00:51:17 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:51:17.328600 | orchestrator | 2025-09-16 00:51:17 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:51:17.329317 
| orchestrator | 2025-09-16 00:51:17 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:51:20.375487 | orchestrator | 2025-09-16 00:51:20 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:51:20.376650 | orchestrator | 2025-09-16 00:51:20 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:51:20.376851 | orchestrator | 2025-09-16 00:51:20 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:51:23.423430 | orchestrator | 2025-09-16 00:51:23 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:51:23.425839 | orchestrator | 2025-09-16 00:51:23 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:51:23.426225 | orchestrator | 2025-09-16 00:51:23 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:51:26.482462 | orchestrator | 2025-09-16 00:51:26 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:51:26.483231 | orchestrator | 2025-09-16 00:51:26 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:51:26.483977 | orchestrator | 2025-09-16 00:51:26 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:51:29.529447 | orchestrator | 2025-09-16 00:51:29 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:51:29.530562 | orchestrator | 2025-09-16 00:51:29 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:51:29.530596 | orchestrator | 2025-09-16 00:51:29 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:51:32.562009 | orchestrator | 2025-09-16 00:51:32 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:51:32.562862 | orchestrator | 2025-09-16 00:51:32 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:51:32.562890 | orchestrator | 2025-09-16 00:51:32 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:51:35.611789 | orchestrator | 2025-09-16 00:51:35 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:51:35.611892 | orchestrator | 2025-09-16 00:51:35 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:51:35.611908 | orchestrator | 2025-09-16 00:51:35 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:51:38.658110 | orchestrator | 2025-09-16 00:51:38 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:51:38.661123 | orchestrator | 2025-09-16 00:51:38 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:51:38.661431 | orchestrator | 2025-09-16 00:51:38 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:51:41.712714 | orchestrator | 2025-09-16 00:51:41 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:51:41.714814 | orchestrator | 2025-09-16 00:51:41 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:51:41.714848 | orchestrator | 2025-09-16 00:51:41 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:51:44.749163 | orchestrator | 2025-09-16 00:51:44 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:51:44.750789 | orchestrator | 2025-09-16 00:51:44 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:51:44.750827 | orchestrator | 2025-09-16 00:51:44 | INFO  | Wait 1 second(s) until the next check 2025-09-16 
00:51:47.788450 | orchestrator | 2025-09-16 00:51:47 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:51:47.789048 | orchestrator | 2025-09-16 00:51:47 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:51:47.789079 | orchestrator | 2025-09-16 00:51:47 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:51:50.831246 | orchestrator | 2025-09-16 00:51:50 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:51:50.832388 | orchestrator | 2025-09-16 00:51:50 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:51:50.832653 | orchestrator | 2025-09-16 00:51:50 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:51:53.882013 | orchestrator | 2025-09-16 00:51:53 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:51:53.886464 | orchestrator | 2025-09-16 00:51:53 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:51:53.886515 | orchestrator | 2025-09-16 00:51:53 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:51:56.926294 | orchestrator | 2025-09-16 00:51:56 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:51:56.928375 | orchestrator | 2025-09-16 00:51:56 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:51:56.928404 | orchestrator | 2025-09-16 00:51:56 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:51:59.963889 | orchestrator | 2025-09-16 00:51:59 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:51:59.964596 | orchestrator | 2025-09-16 00:51:59 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:51:59.964634 | orchestrator | 2025-09-16 00:51:59 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:52:03.022204 | orchestrator | 2025-09-16 00:52:03 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:52:03.023935 | orchestrator | 2025-09-16 00:52:03 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:52:03.023983 | orchestrator | 2025-09-16 00:52:03 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:52:06.065696 | orchestrator | 2025-09-16 00:52:06 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:52:06.065866 | orchestrator | 2025-09-16 00:52:06 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:52:06.065884 | orchestrator | 2025-09-16 00:52:06 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:52:09.105335 | orchestrator | 2025-09-16 00:52:09 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:52:09.107424 | orchestrator | 2025-09-16 00:52:09 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:52:09.107563 | orchestrator | 2025-09-16 00:52:09 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:52:12.148418 | orchestrator | 2025-09-16 00:52:12 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:52:12.149947 | orchestrator | 2025-09-16 00:52:12 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:52:12.150306 | orchestrator | 2025-09-16 00:52:12 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:52:15.178871 | orchestrator | 2025-09-16 00:52:15 | INFO  | Task 
e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:52:15.180462 | orchestrator | 2025-09-16 00:52:15 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:52:15.180791 | orchestrator | 2025-09-16 00:52:15 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:52:18.218882 | orchestrator | 2025-09-16 00:52:18 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:52:18.221278 | orchestrator | 2025-09-16 00:52:18 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:52:18.221323 | orchestrator | 2025-09-16 00:52:18 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:52:21.268636 | orchestrator | 2025-09-16 00:52:21 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:52:21.270523 | orchestrator | 2025-09-16 00:52:21 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:52:21.270846 | orchestrator | 2025-09-16 00:52:21 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:52:24.315159 | orchestrator | 2025-09-16 00:52:24 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:52:24.316142 | orchestrator | 2025-09-16 00:52:24 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:52:24.316177 | orchestrator | 2025-09-16 00:52:24 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:52:27.357446 | orchestrator | 2025-09-16 00:52:27 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:52:27.358483 | orchestrator | 2025-09-16 00:52:27 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:52:27.358843 | orchestrator | 2025-09-16 00:52:27 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:52:30.395945 | orchestrator | 2025-09-16 00:52:30 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:52:30.397545 | orchestrator | 2025-09-16 00:52:30 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:52:30.397597 | orchestrator | 2025-09-16 00:52:30 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:52:33.431998 | orchestrator | 2025-09-16 00:52:33 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:52:33.433251 | orchestrator | 2025-09-16 00:52:33 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:52:33.433737 | orchestrator | 2025-09-16 00:52:33 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:52:36.483721 | orchestrator | 2025-09-16 00:52:36 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:52:36.485507 | orchestrator | 2025-09-16 00:52:36 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:52:36.485614 | orchestrator | 2025-09-16 00:52:36 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:52:39.522658 | orchestrator | 2025-09-16 00:52:39 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:52:39.524989 | orchestrator | 2025-09-16 00:52:39 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:52:39.525479 | orchestrator | 2025-09-16 00:52:39 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:52:42.572508 | orchestrator | 2025-09-16 00:52:42 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:52:42.574464 | orchestrator 
| 2025-09-16 00:52:42 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:52:42.574501 | orchestrator | 2025-09-16 00:52:42 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:52:45.605709 | orchestrator | 2025-09-16 00:52:45 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:52:45.606363 | orchestrator | 2025-09-16 00:52:45 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:52:45.606393 | orchestrator | 2025-09-16 00:52:45 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:52:48.646877 | orchestrator | 2025-09-16 00:52:48 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:52:48.648935 | orchestrator | 2025-09-16 00:52:48 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:52:48.649386 | orchestrator | 2025-09-16 00:52:48 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:52:51.695936 | orchestrator | 2025-09-16 00:52:51 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:52:51.697134 | orchestrator | 2025-09-16 00:52:51 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:52:51.697293 | orchestrator | 2025-09-16 00:52:51 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:52:54.737098 | orchestrator | 2025-09-16 00:52:54 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:52:54.737417 | orchestrator | 2025-09-16 00:52:54 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:52:54.737441 | orchestrator | 2025-09-16 00:52:54 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:52:57.863448 | orchestrator | 2025-09-16 00:52:57 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:52:57.863558 | orchestrator | 2025-09-16 00:52:57 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:52:57.863574 | orchestrator | 2025-09-16 00:52:57 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:53:00.904225 | orchestrator | 2025-09-16 00:53:00 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:53:00.905215 | orchestrator | 2025-09-16 00:53:00 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:53:00.905410 | orchestrator | 2025-09-16 00:53:00 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:53:03.949168 | orchestrator | 2025-09-16 00:53:03 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:53:03.952123 | orchestrator | 2025-09-16 00:53:03 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:53:03.952157 | orchestrator | 2025-09-16 00:53:03 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:53:07.013429 | orchestrator | 2025-09-16 00:53:07 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:53:07.014931 | orchestrator | 2025-09-16 00:53:07 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 2025-09-16 00:53:07.014961 | orchestrator | 2025-09-16 00:53:07 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:53:10.062337 | orchestrator | 2025-09-16 00:53:10 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:53:10.062475 | orchestrator | 2025-09-16 00:53:10 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state STARTED 
2025-09-16 00:53:10.063003 | orchestrator | 2025-09-16 00:53:10 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:53:13.103402 | orchestrator | 2025-09-16 00:53:13 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:53:13.114936 | orchestrator | 2025-09-16 00:53:13 | INFO  | Task 848e07bf-3a9e-4e73-90fb-2b23823f62ac is in state SUCCESS 2025-09-16 00:53:13.115963 | orchestrator | 2025-09-16 00:53:13.118295 | orchestrator | 2025-09-16 00:53:13.118342 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-16 00:53:13.118396 | orchestrator | 2025-09-16 00:53:13.118417 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-16 00:53:13.118436 | orchestrator | Tuesday 16 September 2025 00:47:06 +0000 (0:00:00.274) 0:00:00.274 ***** 2025-09-16 00:53:13.118454 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:53:13.118476 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:53:13.118490 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:53:13.118501 | orchestrator | 2025-09-16 00:53:13.118512 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-16 00:53:13.118524 | orchestrator | Tuesday 16 September 2025 00:47:06 +0000 (0:00:00.310) 0:00:00.585 ***** 2025-09-16 00:53:13.118535 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-09-16 00:53:13.118546 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-09-16 00:53:13.118557 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-09-16 00:53:13.118568 | orchestrator | 2025-09-16 00:53:13.118579 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-09-16 00:53:13.118590 | orchestrator | 2025-09-16 00:53:13.118806 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-16 00:53:13.118818 | orchestrator | Tuesday 16 September 2025 00:47:06 +0000 (0:00:00.513) 0:00:01.099 ***** 2025-09-16 00:53:13.118830 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.118841 | orchestrator | 2025-09-16 00:53:13.118851 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-09-16 00:53:13.118863 | orchestrator | Tuesday 16 September 2025 00:47:07 +0000 (0:00:00.679) 0:00:01.778 ***** 2025-09-16 00:53:13.118873 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:53:13.118884 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:53:13.118897 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:53:13.118911 | orchestrator | 2025-09-16 00:53:13.118924 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-16 00:53:13.118936 | orchestrator | Tuesday 16 September 2025 00:47:08 +0000 (0:00:00.787) 0:00:02.566 ***** 2025-09-16 00:53:13.118948 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.118960 | orchestrator | 2025-09-16 00:53:13.118972 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-09-16 00:53:13.118985 | orchestrator | Tuesday 16 September 2025 00:47:09 +0000 (0:00:01.027) 0:00:03.593 ***** 2025-09-16 00:53:13.118997 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:53:13.119009 | orchestrator | ok: 
[testbed-node-1] 2025-09-16 00:53:13.119022 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:53:13.119034 | orchestrator | 2025-09-16 00:53:13.119046 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-09-16 00:53:13.119059 | orchestrator | Tuesday 16 September 2025 00:47:10 +0000 (0:00:01.508) 0:00:05.102 ***** 2025-09-16 00:53:13.119071 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-16 00:53:13.120076 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-16 00:53:13.120093 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-16 00:53:13.120105 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-16 00:53:13.120115 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-16 00:53:13.120127 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-16 00:53:13.120138 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-16 00:53:13.120149 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-16 00:53:13.120159 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-16 00:53:13.120220 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-16 00:53:13.120258 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-16 00:53:13.120272 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-16 00:53:13.120283 | orchestrator | 2025-09-16 00:53:13.120293 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-16 00:53:13.120305 | orchestrator | Tuesday 16 September 2025 00:47:14 +0000 (0:00:03.321) 0:00:08.424 ***** 2025-09-16 00:53:13.120316 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-16 00:53:13.120327 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-16 00:53:13.120393 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-16 00:53:13.120405 | orchestrator | 2025-09-16 00:53:13.120423 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-16 00:53:13.120463 | orchestrator | Tuesday 16 September 2025 00:47:15 +0000 (0:00:01.014) 0:00:09.438 ***** 2025-09-16 00:53:13.120475 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-16 00:53:13.120546 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-16 00:53:13.120558 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-16 00:53:13.120569 | orchestrator | 2025-09-16 00:53:13.120580 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-16 00:53:13.120591 | orchestrator | Tuesday 16 September 2025 00:47:16 +0000 (0:00:01.554) 0:00:10.993 ***** 2025-09-16 00:53:13.120602 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-09-16 00:53:13.120678 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.120707 | orchestrator | skipping: [testbed-node-1] => 
(item=ip_vs)  2025-09-16 00:53:13.120720 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.120733 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-09-16 00:53:13.120745 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.120780 | orchestrator | 2025-09-16 00:53:13.120794 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-09-16 00:53:13.120807 | orchestrator | Tuesday 16 September 2025 00:47:17 +0000 (0:00:00.921) 0:00:11.915 ***** 2025-09-16 00:53:13.120823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-16 00:53:13.120842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-16 00:53:13.120854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-16 00:53:13.120875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-16 00:53:13.120887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-16 00:53:13.120974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-16 00:53:13.120989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-16 00:53:13.121002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-16 00:53:13.121013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-16 00:53:13.121025 | orchestrator | 2025-09-16 00:53:13.121068 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-09-16 00:53:13.121081 | orchestrator | Tuesday 16 September 2025 00:47:19 +0000 (0:00:02.210) 0:00:14.125 ***** 2025-09-16 00:53:13.121099 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.121111 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.121122 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.121133 | orchestrator | 2025-09-16 00:53:13.121143 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-09-16 00:53:13.121155 | orchestrator | Tuesday 16 September 2025 00:47:21 +0000 (0:00:01.939) 0:00:16.064 ***** 2025-09-16 00:53:13.121166 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-09-16 00:53:13.121177 | orchestrator | changed: [testbed-node-1] => (item=users) 
2025-09-16 00:53:13.121187 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-09-16 00:53:13.121198 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-09-16 00:53:13.121235 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-09-16 00:53:13.121247 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-09-16 00:53:13.121257 | orchestrator | 2025-09-16 00:53:13.121268 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-09-16 00:53:13.121279 | orchestrator | Tuesday 16 September 2025 00:47:24 +0000 (0:00:02.196) 0:00:18.261 ***** 2025-09-16 00:53:13.121290 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.121301 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.121311 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.121346 | orchestrator | 2025-09-16 00:53:13.121358 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-09-16 00:53:13.121369 | orchestrator | Tuesday 16 September 2025 00:47:25 +0000 (0:00:00.912) 0:00:19.173 ***** 2025-09-16 00:53:13.121380 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:53:13.121391 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:53:13.121402 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:53:13.121412 | orchestrator | 2025-09-16 00:53:13.121423 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-09-16 00:53:13.121457 | orchestrator | Tuesday 16 September 2025 00:47:27 +0000 (0:00:02.267) 0:00:21.441 ***** 2025-09-16 00:53:13.121475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.121604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.121619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.121639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.121651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__0b1e76d9e584ec6b606e60c217d6cc3f16e4b305', '__omit_place_holder__0b1e76d9e584ec6b606e60c217d6cc3f16e4b305'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-16 00:53:13.121663 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.121674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.121686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.121703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__0b1e76d9e584ec6b606e60c217d6cc3f16e4b305', '__omit_place_holder__0b1e76d9e584ec6b606e60c217d6cc3f16e4b305'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-16 00:53:13.121714 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.121735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.121747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.121832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.121847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__0b1e76d9e584ec6b606e60c217d6cc3f16e4b305', '__omit_place_holder__0b1e76d9e584ec6b606e60c217d6cc3f16e4b305'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-16 00:53:13.121858 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.121869 | orchestrator | 2025-09-16 00:53:13.121880 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-09-16 00:53:13.121891 | orchestrator | Tuesday 16 September 2025 00:47:28 +0000 (0:00:01.463) 0:00:22.905 ***** 2025-09-16 00:53:13.121903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-16 00:53:13.121915 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-16 00:53:13.121936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-16 00:53:13.121953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-16 00:53:13.121965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.122107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__0b1e76d9e584ec6b606e60c217d6cc3f16e4b305', '__omit_place_holder__0b1e76d9e584ec6b606e60c217d6cc3f16e4b305'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-16 00:53:13.122135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-16 00:53:13.122147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.122163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__0b1e76d9e584ec6b606e60c217d6cc3f16e4b305', '__omit_place_holder__0b1e76d9e584ec6b606e60c217d6cc3f16e4b305'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-16 00:53:13.122185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-16 00:53:13.122208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.122220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__0b1e76d9e584ec6b606e60c217d6cc3f16e4b305', '__omit_place_holder__0b1e76d9e584ec6b606e60c217d6cc3f16e4b305'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-16 00:53:13.122231 | orchestrator | 
2025-09-16 00:53:13.122242 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-09-16 00:53:13.122254 | orchestrator | Tuesday 16 September 2025 00:47:32 +0000 (0:00:03.390) 0:00:26.295 ***** 2025-09-16 00:53:13.122265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-16 00:53:13.122277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-16 00:53:13.122293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-16 00:53:13.122312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-16 00:53:13.122329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 
'timeout': '30'}}}) 2025-09-16 00:53:13.122340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-16 00:53:13.122350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-16 00:53:13.122361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-16 00:53:13.122371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-16 00:53:13.122451 | orchestrator | 2025-09-16 00:53:13.122462 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-09-16 00:53:13.122473 | orchestrator | Tuesday 16 September 2025 00:47:35 +0000 (0:00:03.168) 0:00:29.464 ***** 2025-09-16 00:53:13.122483 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-16 00:53:13.122492 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-16 00:53:13.122506 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-16 00:53:13.122522 | orchestrator | 2025-09-16 00:53:13.122532 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-09-16 00:53:13.122542 | orchestrator | Tuesday 16 September 2025 00:47:37 +0000 (0:00:02.205) 0:00:31.669 ***** 2025-09-16 00:53:13.122551 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-16 00:53:13.122561 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-16 00:53:13.122571 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-16 00:53:13.122581 | orchestrator | 2025-09-16 00:53:13.122596 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-09-16 00:53:13.122606 | orchestrator | Tuesday 16 September 2025 00:47:42 +0000 (0:00:05.037) 0:00:36.707 ***** 2025-09-16 00:53:13.122616 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.122626 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.122635 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.122645 | orchestrator | 2025-09-16 00:53:13.122654 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-09-16 00:53:13.122664 | orchestrator | Tuesday 16 September 2025 00:47:43 +0000 (0:00:00.568) 0:00:37.275 ***** 2025-09-16 00:53:13.122674 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-16 00:53:13.122685 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-16 00:53:13.122694 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-16 00:53:13.122704 | orchestrator | 2025-09-16 00:53:13.122714 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-09-16 00:53:13.122723 | orchestrator | Tuesday 16 September 2025 00:47:46 +0000 (0:00:03.024) 0:00:40.300 ***** 2025-09-16 00:53:13.122733 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-16 00:53:13.122743 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-16 00:53:13.122753 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-16 00:53:13.122780 | orchestrator | 2025-09-16 00:53:13.122791 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-09-16 00:53:13.122800 | orchestrator | Tuesday 16 September 2025 00:47:49 +0000 (0:00:03.274) 0:00:43.574 ***** 2025-09-16 00:53:13.122810 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-09-16 00:53:13.122820 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-09-16 00:53:13.122830 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-09-16 00:53:13.122840 | orchestrator | 2025-09-16 00:53:13.122850 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-09-16 00:53:13.122859 | orchestrator | Tuesday 16 September 2025 00:47:51 +0000 (0:00:01.684) 0:00:45.259 ***** 2025-09-16 00:53:13.122869 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-09-16 00:53:13.122879 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-09-16 00:53:13.122888 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-09-16 00:53:13.122898 | orchestrator | 2025-09-16 00:53:13.122908 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-16 
00:53:13.122917 | orchestrator | Tuesday 16 September 2025 00:47:52 +0000 (0:00:01.873) 0:00:47.133 ***** 2025-09-16 00:53:13.122927 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.122937 | orchestrator | 2025-09-16 00:53:13.122946 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-09-16 00:53:13.122962 | orchestrator | Tuesday 16 September 2025 00:47:53 +0000 (0:00:00.515) 0:00:47.648 ***** 2025-09-16 00:53:13.123004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-16 00:53:13.123021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-16 00:53:13.123062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-16 00:53:13.123074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-16 00:53:13.123084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-16 00:53:13.123095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-16 00:53:13.123204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-16 00:53:13.123215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-16 00:53:13.123229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-16 00:53:13.123239 | orchestrator | 2025-09-16 00:53:13.123249 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-09-16 00:53:13.123259 | orchestrator | Tuesday 16 September 2025 00:47:57 +0000 (0:00:03.990) 0:00:51.638 ***** 2025-09-16 00:53:13.123276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.123287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.123297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.123307 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.123318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.123333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.123343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.123353 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.123368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.123400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.123411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.123421 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.123431 | orchestrator | 2025-09-16 00:53:13.123441 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-09-16 00:53:13.123451 | orchestrator | Tuesday 16 September 2025 00:47:58 +0000 (0:00:00.867) 0:00:52.506 ***** 2025-09-16 00:53:13.123461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.123476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.123486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.123496 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.123514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.123530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.123540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.123550 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.123560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.123601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.123612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.123622 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.123632 | orchestrator | 2025-09-16 00:53:13.123684 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-16 00:53:13.123704 | orchestrator | Tuesday 16 September 2025 00:48:01 +0000 (0:00:02.874) 0:00:55.380 ***** 2025-09-16 00:53:13.123722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.123885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.123910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.123929 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.123947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.123976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.123990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.124000 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.124010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.124025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.124042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.124052 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.124062 | orchestrator | 2025-09-16 00:53:13.124072 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-16 00:53:13.124082 | orchestrator | Tuesday 16 September 2025 
00:48:01 +0000 (0:00:00.786) 0:00:56.167 ***** 2025-09-16 00:53:13.124092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.124107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.124118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.124127 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.124137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.124147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.124161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.124177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.124188 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.124204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.124214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.124224 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.124234 | orchestrator | 2025-09-16 00:53:13.124243 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-16 00:53:13.124290 | orchestrator | Tuesday 16 September 2025 00:48:02 +0000 (0:00:00.439) 0:00:56.606 ***** 2025-09-16 00:53:13.124335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.124347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.124421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.124433 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.124450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.124467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.124477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.124487 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.124497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.124507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.124517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.124527 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.124537 | orchestrator | 2025-09-16 00:53:13.124547 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-09-16 00:53:13.124557 | orchestrator | Tuesday 16 September 2025 00:48:03 +0000 (0:00:00.683) 0:00:57.290 ***** 2025-09-16 00:53:13.124571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.124595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.124606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.124616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.124626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.124636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.124646 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.124656 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.124666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.124685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.124701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.124711 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.124750 | orchestrator | 2025-09-16 00:53:13.124804 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-09-16 00:53:13.124828 | orchestrator | Tuesday 16 September 2025 00:48:04 +0000 (0:00:00.918) 0:00:58.208 ***** 2025-09-16 00:53:13.124839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.124849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.124885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.124896 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.124906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.125041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.125083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.125093 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.125104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.125127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.125138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.125148 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.125158 | orchestrator | 2025-09-16 00:53:13.125168 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-09-16 00:53:13.125177 | orchestrator | Tuesday 16 September 2025 00:48:04 +0000 (0:00:00.510) 0:00:58.719 ***** 2025-09-16 00:53:13.125188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.125198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.125224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.125235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.125314 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.125327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.125337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.125347 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.125357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-16 00:53:13.125367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-16 00:53:13.125447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-16 00:53:13.125460 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.125470 | orchestrator | 2025-09-16 00:53:13.125487 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-09-16 00:53:13.125504 | orchestrator | Tuesday 16 September 2025 00:48:05 +0000 (0:00:00.778) 0:00:59.497 ***** 2025-09-16 00:53:13.125520 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-16 00:53:13.125536 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-16 00:53:13.125558 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-16 00:53:13.125576 | orchestrator | 2025-09-16 00:53:13.125592 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-09-16 00:53:13.125609 | orchestrator | Tuesday 16 September 2025 00:48:07 +0000 (0:00:02.005) 0:01:01.503 ***** 2025-09-16 00:53:13.125625 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-16 00:53:13.125641 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-16 00:53:13.125652 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-16 00:53:13.125661 | orchestrator | 2025-09-16 00:53:13.125685 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-09-16 00:53:13.125695 | orchestrator | Tuesday 16 September 2025 00:48:10 +0000 (0:00:02.809) 0:01:04.313 ***** 2025-09-16 00:53:13.125705 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-16 00:53:13.125715 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-16 00:53:13.125725 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.125853 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-16 00:53:13.125864 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-16 00:53:13.125874 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-16 00:53:13.125884 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.125894 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-16 00:53:13.125903 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.125913 | orchestrator | 2025-09-16 00:53:13.125983 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-09-16 00:53:13.125994 | orchestrator | Tuesday 16 September 2025 00:48:11 +0000 (0:00:01.165) 0:01:05.478 ***** 2025-09-16 00:53:13.126005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-16 00:53:13.126054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-16 00:53:13.126068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-16 00:53:13.126092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-16 00:53:13.126104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-16 00:53:13.126114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-16 00:53:13.126124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-16 00:53:13.126140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-16 00:53:13.126150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-16 00:53:13.126160 | orchestrator | 2025-09-16 00:53:13.126169 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-09-16 00:53:13.126179 | orchestrator | Tuesday 16 September 2025 00:48:14 +0000 (0:00:03.057) 0:01:08.535 ***** 2025-09-16 00:53:13.126188 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.126201 | orchestrator | 2025-09-16 00:53:13.126218 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-09-16 00:53:13.126234 | orchestrator | Tuesday 16 September 2025 00:48:15 +0000 (0:00:00.949) 0:01:09.485 ***** 2025-09-16 00:53:13.126251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-16 00:53:13.126278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-16 00:53:13.126297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-16 00:53:13.126382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-16 00:53:13.126452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.126478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.126555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.126592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.126607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': 
'8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-16 00:53:13.126621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-16 00:53:13.126645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.126657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.126670 | orchestrator | 2025-09-16 00:53:13.126683 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-09-16 00:53:13.126697 | orchestrator | Tuesday 16 September 2025 00:48:20 +0000 (0:00:04.729) 0:01:14.215 ***** 2025-09-16 00:53:13.126715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-16 00:53:13.126736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-16 00:53:13.126748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.126780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.126800 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.126813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-16 00:53:13.126827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-16 00:53:13.126842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.126861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.126916 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.126982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-16 00:53:13.127010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-16 00:53:13.127024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.127039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.127054 | orchestrator | skipping: [testbed-node-2] 2025-09-16 
00:53:13.127068 | orchestrator | 2025-09-16 00:53:13.127082 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-09-16 00:53:13.127090 | orchestrator | Tuesday 16 September 2025 00:48:21 +0000 (0:00:01.298) 0:01:15.514 ***** 2025-09-16 00:53:13.127099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-16 00:53:13.127108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-16 00:53:13.127116 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.127125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-16 00:53:13.127137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-16 00:53:13.127145 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.127153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-16 00:53:13.127161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-16 00:53:13.127170 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.127177 | orchestrator | 2025-09-16 00:53:13.127266 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-09-16 00:53:13.127277 | orchestrator | Tuesday 16 September 2025 00:48:22 +0000 (0:00:01.336) 0:01:16.850 ***** 2025-09-16 00:53:13.127285 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.127299 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.127307 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.127315 | orchestrator | 2025-09-16 00:53:13.127323 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-09-16 00:53:13.127331 | orchestrator | Tuesday 16 September 2025 00:48:24 +0000 (0:00:01.380) 0:01:18.231 ***** 2025-09-16 00:53:13.127339 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.127390 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.127400 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.127408 | orchestrator | 2025-09-16 00:53:13.127441 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-09-16 00:53:13.127456 | orchestrator | Tuesday 16 September 2025 00:48:25 +0000 (0:00:01.712) 0:01:19.943 ***** 2025-09-16 00:53:13.127470 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.127482 | orchestrator | 2025-09-16 00:53:13.127495 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-09-16 00:53:13.127508 | orchestrator | Tuesday 16 September 2025 00:48:27 +0000 (0:00:01.383) 0:01:21.326 ***** 
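
The per-service haproxy-config items above (aodh) and below (barbican) follow one pattern: each kolla service definition may carry a 'haproxy' mapping with an internal entry and a matching '_external' entry (mode, port, listen_port, and for external listeners an external_fqdn such as api.testbed.osism.xyz). A minimal Python sketch of that layout, using the aodh-api values from this run; extract_listeners is a hypothetical helper for illustration only, not kolla-ansible code:

# Sketch only: models the item layout shown in the log above.
aodh_api = {
    "container_name": "aodh_api",
    "group": "aodh-api",
    "enabled": True,
    "image": "registry.osism.tech/kolla/aodh-api:2024.2",
    "haproxy": {
        "aodh_api": {
            "enabled": "yes", "mode": "http", "external": False,
            "port": "8042", "listen_port": "8042",
        },
        "aodh_api_external": {
            "enabled": "yes", "mode": "http", "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "8042", "listen_port": "8042",
        },
    },
}

def extract_listeners(service):
    """Yield (name, listen_port, is_external) for each enabled haproxy entry."""
    for name, cfg in service.get("haproxy", {}).items():
        if cfg.get("enabled") == "yes":
            yield name, cfg["listen_port"], bool(cfg.get("external"))

for name, port, external in extract_listeners(aodh_api):
    scope = "external" if external else "internal"
    print(f"{name}: listen_port={port} ({scope})")

The same shape recurs for barbican below (port 9311, tls_backend 'no') and for cinder later in the run (port 8776); entries without a 'haproxy' key (evaluator, listener, notifier, worker containers) are skipped by the haproxy-config tasks.
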
2025-09-16 00:53:13.127524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-16 00:53:13.127539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.127554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.127570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-16 00:53:13.127592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.127600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.127609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-16 00:53:13.127617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.127625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.127634 | orchestrator | 2025-09-16 00:53:13.127642 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-09-16 00:53:13.127650 | orchestrator | Tuesday 16 September 2025 00:48:30 +0000 (0:00:03.195) 0:01:24.522 ***** 
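
Every container definition in these items carries the same healthcheck block: interval, retries, start_period, and timeout as strings, plus a CMD-SHELL test using one of the kolla helpers seen in this log (healthcheck_curl, healthcheck_port, or healthcheck_listen). A rough sketch of how such a block could be expressed as Docker-style health flags, using the barbican-api values from this run; to_docker_healthcheck_args is an illustrative helper only (units assumed to be seconds), not how kolla-ansible actually applies the healthcheck:

# Sketch only: healthcheck dict copied from the barbican-api item above.
barbican_api_healthcheck = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.11:9311"],
    "timeout": "30",
}

def to_docker_healthcheck_args(hc):
    """Translate the dict into 'docker run' style health flags (seconds assumed)."""
    return [
        f"--health-cmd={hc['test'][1]}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

print("\n".join(to_docker_healthcheck_args(barbican_api_healthcheck)))
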
2025-09-16 00:53:13.127671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-16 00:53:13.127701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.127709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.127718 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.127726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-16 00:53:13.127734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.127746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.127839 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.127855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-16 00:53:13.127864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.127873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.127881 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.127889 | orchestrator | 2025-09-16 00:53:13.127912 | orchestrator 
| TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-09-16 00:53:13.127921 | orchestrator | Tuesday 16 September 2025 00:48:30 +0000 (0:00:00.542) 0:01:25.064 ***** 2025-09-16 00:53:13.127929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-16 00:53:13.127939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-16 00:53:13.127947 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.127956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-16 00:53:13.127964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-16 00:53:13.127976 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.127985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-16 00:53:13.127993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-16 00:53:13.128001 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.128008 | orchestrator | 2025-09-16 00:53:13.128020 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-09-16 00:53:13.128028 | orchestrator | Tuesday 16 September 2025 00:48:31 +0000 (0:00:00.924) 0:01:25.988 ***** 2025-09-16 00:53:13.128036 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.128044 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.128052 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.128059 | orchestrator | 2025-09-16 00:53:13.128067 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-09-16 00:53:13.128075 | orchestrator | Tuesday 16 September 2025 00:48:33 +0000 (0:00:01.473) 0:01:27.461 ***** 2025-09-16 00:53:13.128083 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.128091 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.128098 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.128106 | orchestrator | 2025-09-16 00:53:13.128118 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-09-16 00:53:13.128127 | orchestrator | Tuesday 16 September 2025 00:48:35 +0000 (0:00:02.057) 0:01:29.518 ***** 2025-09-16 00:53:13.128134 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.128142 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.128150 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.128158 | orchestrator | 2025-09-16 00:53:13.128166 | orchestrator | TASK [include_role : ceph-rgw] 
************************************************* 2025-09-16 00:53:13.128173 | orchestrator | Tuesday 16 September 2025 00:48:35 +0000 (0:00:00.318) 0:01:29.837 ***** 2025-09-16 00:53:13.128181 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.128189 | orchestrator | 2025-09-16 00:53:13.128196 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-09-16 00:53:13.128204 | orchestrator | Tuesday 16 September 2025 00:48:36 +0000 (0:00:00.813) 0:01:30.650 ***** 2025-09-16 00:53:13.128212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-16 00:53:13.128222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-16 00:53:13.128236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-16 00:53:13.128244 | orchestrator | 2025-09-16 00:53:13.128252 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-09-16 00:53:13.128260 | orchestrator | Tuesday 16 September 2025 00:48:39 +0000 (0:00:02.596) 0:01:33.247 ***** 2025-09-16 00:53:13.128276 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-16 00:53:13.128284 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.128291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-16 00:53:13.128298 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.128305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-16 00:53:13.128317 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.128323 | orchestrator | 2025-09-16 00:53:13.128330 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-09-16 00:53:13.128337 | orchestrator | Tuesday 16 September 2025 00:48:40 +0000 (0:00:01.403) 0:01:34.651 ***** 2025-09-16 00:53:13.128344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-16 00:53:13.128353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-16 00:53:13.128361 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.128368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-16 00:53:13.128377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-16 00:53:13.128385 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.128395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-16 00:53:13.128402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-16 00:53:13.128410 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.128416 | orchestrator | 2025-09-16 00:53:13.128423 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-09-16 00:53:13.128430 | orchestrator | Tuesday 16 September 2025 00:48:42 +0000 (0:00:01.723) 0:01:36.375 ***** 2025-09-16 00:53:13.128436 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.128443 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.128450 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.128456 | orchestrator | 2025-09-16 00:53:13.128463 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-09-16 00:53:13.128470 | orchestrator | Tuesday 16 September 2025 00:48:42 +0000 (0:00:00.655) 0:01:37.031 ***** 2025-09-16 00:53:13.128476 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.128488 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.128495 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.128501 | orchestrator | 2025-09-16 00:53:13.128508 | orchestrator | TASK [include_role : cinder] 
*************************************************** 2025-09-16 00:53:13.128514 | orchestrator | Tuesday 16 September 2025 00:48:44 +0000 (0:00:01.177) 0:01:38.208 ***** 2025-09-16 00:53:13.128521 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.128528 | orchestrator | 2025-09-16 00:53:13.128534 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-09-16 00:53:13.128541 | orchestrator | Tuesday 16 September 2025 00:48:44 +0000 (0:00:00.752) 0:01:38.961 ***** 2025-09-16 00:53:13.128548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-16 00:53:13.128555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.128569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.128586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.128599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-16 00:53:13.128617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.128630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-16 00:53:13.128648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.128666 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.128679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.128698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.128709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.128716 | orchestrator | 2025-09-16 00:53:13.128723 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-09-16 00:53:13.128730 | orchestrator | Tuesday 16 September 2025 00:48:48 +0000 (0:00:03.902) 0:01:42.863 ***** 2025-09-16 00:53:13.128737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-16 00:53:13.128748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.128776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.128789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.128796 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.128804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-16 00:53:13.128810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.128821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.128832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.128857 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.128864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-16 
00:53:13.128871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.128878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.128888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.128896 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.128902 | orchestrator | 2025-09-16 00:53:13.128909 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-09-16 00:53:13.128916 | orchestrator | Tuesday 16 September 2025 00:48:50 +0000 (0:00:01.327) 0:01:44.190 ***** 2025-09-16 00:53:13.128923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-16 00:53:13.128938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-16 00:53:13.128945 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.128952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-16 00:53:13.128959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-16 00:53:13.128965 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.128972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-16 00:53:13.128979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-16 00:53:13.128986 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.128992 | orchestrator | 2025-09-16 00:53:13.128999 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-09-16 00:53:13.129006 | orchestrator | Tuesday 16 September 2025 00:48:50 +0000 (0:00:00.957) 0:01:45.148 ***** 2025-09-16 00:53:13.129012 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.129019 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.129025 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.129032 | orchestrator | 2025-09-16 00:53:13.129039 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-09-16 00:53:13.129045 | orchestrator | Tuesday 16 September 2025 00:48:52 +0000 (0:00:01.370) 0:01:46.518 ***** 2025-09-16 00:53:13.129052 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.129058 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.129065 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.129072 | orchestrator | 2025-09-16 00:53:13.129078 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-09-16 00:53:13.129085 | orchestrator | Tuesday 16 September 2025 00:48:54 +0000 (0:00:02.156) 0:01:48.675 ***** 2025-09-16 00:53:13.129091 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.129098 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.129105 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.129111 | orchestrator | 2025-09-16 00:53:13.129118 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-09-16 00:53:13.129124 | orchestrator | Tuesday 16 September 2025 00:48:55 +0000 (0:00:00.573) 0:01:49.249 ***** 2025-09-16 00:53:13.129131 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.129137 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.129144 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.129151 | orchestrator | 2025-09-16 00:53:13.129157 | orchestrator | TASK [include_role : designate] ************************************************ 2025-09-16 00:53:13.129164 | orchestrator | Tuesday 16 September 2025 00:48:55 +0000 (0:00:00.368) 0:01:49.617 ***** 2025-09-16 00:53:13.129170 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.129177 | orchestrator | 2025-09-16 00:53:13.129183 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-09-16 00:53:13.129190 | orchestrator | Tuesday 16 September 2025 00:48:56 +0000 (0:00:00.803) 0:01:50.421 ***** 2025-09-16 00:53:13.129196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-16 00:53:13.129215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-16 00:53:13.129222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.129230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-16 00:53:13.129237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.129244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.129255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-16 00:53:13.129265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.129277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.129284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.129291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.129298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.129305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.129316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.129330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-16 00:53:13.129337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-16 00:53:13.129344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.129351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.129358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.129369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.129379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.129386 | orchestrator | 2025-09-16 00:53:13.129393 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single 
external frontend] *** 2025-09-16 00:53:13.129400 | orchestrator | Tuesday 16 September 2025 00:48:59 +0000 (0:00:03.713) 0:01:54.134 ***** 2025-09-16 00:53:13.129411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-16 00:53:13.129419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-16 00:53:13.129426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-16 00:53:13.129437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-16 00:53:13.129444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.129455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.129850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.129867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.129874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.129882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-16 00:53:13.129896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.129903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-16 00:53:13.129954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.129968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.129975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.129986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.130003 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.130054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.130069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.130081 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.130100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.130121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.130170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.130177 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.130184 | orchestrator | 2025-09-16 00:53:13.130191 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-09-16 00:53:13.130198 | orchestrator | Tuesday 16 September 2025 00:49:00 +0000 (0:00:00.880) 0:01:55.015 ***** 2025-09-16 00:53:13.130205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-16 00:53:13.130282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-16 00:53:13.130290 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.130297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-16 00:53:13.130304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-16 00:53:13.130311 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.130328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-16 00:53:13.130335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-16 00:53:13.130342 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.130348 | orchestrator | 2025-09-16 00:53:13.130355 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-09-16 00:53:13.130362 | orchestrator | Tuesday 16 September 2025 00:49:01 +0000 (0:00:00.976) 0:01:55.992 ***** 2025-09-16 00:53:13.130368 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.130375 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.130382 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.130388 | orchestrator | 2025-09-16 00:53:13.130395 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-09-16 00:53:13.130402 | orchestrator | Tuesday 16 September 2025 00:49:03 +0000 (0:00:01.654) 0:01:57.646 ***** 2025-09-16 00:53:13.130408 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.130415 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.130421 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.130428 | orchestrator | 2025-09-16 00:53:13.130442 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-09-16 00:53:13.130448 | orchestrator | Tuesday 16 September 2025 00:49:05 +0000 (0:00:01.709) 0:01:59.356 ***** 2025-09-16 00:53:13.130455 | orchestrator | skipping: [testbed-node-0] 2025-09-16 
00:53:13.130464 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.130471 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.130479 | orchestrator | 2025-09-16 00:53:13.130487 | orchestrator | TASK [include_role : glance] *************************************************** 2025-09-16 00:53:13.130495 | orchestrator | Tuesday 16 September 2025 00:49:05 +0000 (0:00:00.474) 0:01:59.830 ***** 2025-09-16 00:53:13.130506 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.130514 | orchestrator | 2025-09-16 00:53:13.130522 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-09-16 00:53:13.130530 | orchestrator | Tuesday 16 September 2025 00:49:06 +0000 (0:00:00.743) 0:02:00.573 ***** 2025-09-16 00:53:13.130546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-16 00:53:13.130562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-16 00:53:13.130580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-16 00:53:13.130594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-16 00:53:13.130611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-16 00:53:13.130625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-16 00:53:13.130633 | orchestrator | 2025-09-16 00:53:13.130642 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-09-16 00:53:13.130655 | orchestrator | Tuesday 16 September 2025 00:49:10 +0000 (0:00:04.063) 0:02:04.637 ***** 2025-09-16 00:53:13.130676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-16 00:53:13.130725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-16 00:53:13.130745 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.130806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-16 00:53:13.130833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 
'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-16 00:53:13.130848 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.130856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-16 00:53:13.130880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-16 00:53:13.130903 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.130915 | orchestrator | 2025-09-16 00:53:13.130926 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-09-16 00:53:13.130938 | orchestrator | Tuesday 16 September 2025 00:49:13 +0000 (0:00:02.964) 0:02:07.601 ***** 2025-09-16 00:53:13.130948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-16 00:53:13.130957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-16 00:53:13.130965 | orchestrator | skipping: [testbed-node-0] 2025-09-16 
00:53:13.130977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-16 00:53:13.130989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-16 00:53:13.131001 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.131017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-16 00:53:13.131042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-16 00:53:13.131054 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.131066 | orchestrator | 2025-09-16 00:53:13.131078 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-09-16 00:53:13.131089 | orchestrator | Tuesday 16 September 2025 00:49:16 +0000 (0:00:03.501) 0:02:11.103 ***** 2025-09-16 00:53:13.131100 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.131111 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.131123 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.131134 | orchestrator | 2025-09-16 00:53:13.131145 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-09-16 00:53:13.131155 | orchestrator | Tuesday 16 September 2025 00:49:18 +0000 (0:00:01.252) 0:02:12.355 ***** 2025-09-16 00:53:13.131166 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.131176 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.131188 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.131198 | orchestrator | 2025-09-16 00:53:13.131208 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-09-16 00:53:13.131218 | orchestrator | Tuesday 16 
September 2025 00:49:20 +0000 (0:00:02.171) 0:02:14.527 ***** 2025-09-16 00:53:13.131227 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.131237 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.131247 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.131256 | orchestrator | 2025-09-16 00:53:13.131265 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-09-16 00:53:13.131276 | orchestrator | Tuesday 16 September 2025 00:49:20 +0000 (0:00:00.504) 0:02:15.032 ***** 2025-09-16 00:53:13.131287 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.131298 | orchestrator | 2025-09-16 00:53:13.131309 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-09-16 00:53:13.131320 | orchestrator | Tuesday 16 September 2025 00:49:21 +0000 (0:00:00.924) 0:02:15.956 ***** 2025-09-16 00:53:13.131332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-16 00:53:13.131342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-16 00:53:13.131364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-16 00:53:13.131375 | orchestrator | 2025-09-16 00:53:13.131386 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-09-16 00:53:13.131396 | orchestrator | Tuesday 16 September 2025 00:49:25 +0000 (0:00:03.732) 0:02:19.689 ***** 2025-09-16 00:53:13.131415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-16 00:53:13.131427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-16 00:53:13.131439 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.131449 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.131460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-16 00:53:13.131470 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.131480 | orchestrator | 2025-09-16 00:53:13.131490 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-09-16 00:53:13.131500 | orchestrator | Tuesday 16 September 2025 00:49:26 +0000 (0:00:00.819) 0:02:20.509 ***** 2025-09-16 00:53:13.131510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-16 00:53:13.131521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-16 00:53:13.131539 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.131549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-16 00:53:13.131559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-16 00:53:13.131570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-16 00:53:13.131580 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.131591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-16 00:53:13.131601 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.131611 | orchestrator | 2025-09-16 00:53:13.131622 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-09-16 00:53:13.131632 | orchestrator | Tuesday 16 September 2025 00:49:26 +0000 (0:00:00.618) 0:02:21.127 ***** 2025-09-16 00:53:13.131642 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.131653 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.131667 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.131678 | orchestrator | 2025-09-16 00:53:13.131685 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-09-16 00:53:13.131691 | orchestrator | Tuesday 16 September 2025 00:49:28 +0000 (0:00:01.145) 0:02:22.272 ***** 2025-09-16 00:53:13.131698 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.131704 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.131710 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.131716 | orchestrator | 2025-09-16 00:53:13.131723 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-09-16 00:53:13.131729 | orchestrator | Tuesday 16 September 2025 00:49:30 +0000 (0:00:02.209) 0:02:24.482 ***** 2025-09-16 00:53:13.131736 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.131742 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.131753 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.131777 | orchestrator | 2025-09-16 00:53:13.131783 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-09-16 00:53:13.131790 | orchestrator | Tuesday 16 September 2025 00:49:31 +0000 (0:00:00.762) 0:02:25.244 ***** 2025-09-16 00:53:13.131796 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.131802 | orchestrator | 2025-09-16 00:53:13.131808 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-09-16 00:53:13.131814 | orchestrator | Tuesday 16 September 2025 00:49:32 +0000 (0:00:00.931) 0:02:26.176 ***** 2025-09-16 00:53:13.131822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-16 00:53:13.131844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-16 00:53:13.131852 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-16 00:53:13.131863 | orchestrator | 2025-09-16 00:53:13.131870 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-09-16 00:53:13.131876 | orchestrator | Tuesday 16 September 2025 00:49:35 +0000 (0:00:03.430) 0:02:29.606 ***** 2025-09-16 00:53:13.131891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-16 00:53:13.131898 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.131906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-16 00:53:13.131917 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.131931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-16 00:53:13.131945 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.131952 | orchestrator | 2025-09-16 00:53:13.131958 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-09-16 00:53:13.131964 | orchestrator | Tuesday 16 September 2025 00:49:36 +0000 (0:00:01.240) 0:02:30.847 ***** 2025-09-16 00:53:13.131970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-16 00:53:13.131978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-16 00:53:13.131984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-16 00:53:13.131991 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-16 00:53:13.131999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-16 00:53:13.132005 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.132012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-16 00:53:13.132024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-16 00:53:13.132031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-16 00:53:13.132041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-16 00:53:13.132047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-16 00:53:13.132054 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.132060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-16 00:53:13.132072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-16 00:53:13.132078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-16 00:53:13.132085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-16 00:53:13.132091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-16 00:53:13.132098 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.132104 | orchestrator | 2025-09-16 00:53:13.132110 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-09-16 00:53:13.132116 | orchestrator | Tuesday 16 September 2025 00:49:37 +0000 (0:00:00.971) 0:02:31.818 ***** 2025-09-16 00:53:13.132123 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.132129 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.132135 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.132141 | orchestrator | 2025-09-16 00:53:13.132148 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-09-16 00:53:13.132154 | orchestrator | Tuesday 16 September 2025 00:49:38 +0000 (0:00:01.277) 0:02:33.096 ***** 2025-09-16 00:53:13.132160 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.132166 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.132172 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.132178 | orchestrator | 2025-09-16 00:53:13.132184 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-09-16 00:53:13.132190 | orchestrator | Tuesday 16 September 2025 00:49:41 +0000 (0:00:02.079) 0:02:35.176 ***** 2025-09-16 00:53:13.132197 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.132203 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.132209 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.132215 | orchestrator | 2025-09-16 00:53:13.132221 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-09-16 00:53:13.132227 | orchestrator | Tuesday 16 September 2025 00:49:41 +0000 (0:00:00.336) 0:02:35.512 ***** 2025-09-16 00:53:13.132233 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.132240 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.132246 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.132252 | orchestrator | 2025-09-16 00:53:13.132258 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-09-16 00:53:13.132264 | orchestrator | Tuesday 16 September 2025 00:49:41 +0000 (0:00:00.513) 0:02:36.026 ***** 2025-09-16 00:53:13.132270 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.132276 | orchestrator | 2025-09-16 00:53:13.132282 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-09-16 00:53:13.132291 | orchestrator | Tuesday 16 September 2025 00:49:42 +0000 (0:00:00.934) 0:02:36.961 ***** 2025-09-16 00:53:13.132545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-16 00:53:13.132566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-16 00:53:13.132573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-16 00:53:13.132580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-16 00:53:13.132587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-16 00:53:13.132598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-16 00:53:13.132653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-16 00:53:13.132662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-16 00:53:13.132669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-16 00:53:13.132676 | orchestrator | 2025-09-16 00:53:13.132682 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-09-16 00:53:13.132688 | orchestrator | Tuesday 16 September 2025 00:49:46 +0000 (0:00:03.376) 0:02:40.337 ***** 2025-09-16 00:53:13.132695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-16 00:53:13.132711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-16 00:53:13.132773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-16 00:53:13.132783 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.132790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-16 00:53:13.132797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-16 00:53:13.132804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-16 00:53:13.132811 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.132821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-16 00:53:13.132872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-16 00:53:13.132881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-16 00:53:13.132888 | orchestrator | skipping: 
[testbed-node-2] 2025-09-16 00:53:13.132894 | orchestrator | 2025-09-16 00:53:13.132901 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-09-16 00:53:13.132907 | orchestrator | Tuesday 16 September 2025 00:49:47 +0000 (0:00:00.873) 0:02:41.211 ***** 2025-09-16 00:53:13.132913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-16 00:53:13.132921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-16 00:53:13.132927 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.132934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-16 00:53:13.132940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-16 00:53:13.132947 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.132953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-16 00:53:13.132960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-16 00:53:13.132971 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.132977 | orchestrator | 2025-09-16 00:53:13.132983 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-09-16 00:53:13.132989 | orchestrator | Tuesday 16 September 2025 00:49:47 +0000 (0:00:00.808) 0:02:42.019 ***** 2025-09-16 00:53:13.132995 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.133001 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.133008 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.133014 | orchestrator | 2025-09-16 00:53:13.133020 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-09-16 00:53:13.133026 | orchestrator | Tuesday 16 September 2025 00:49:49 +0000 (0:00:01.210) 0:02:43.230 ***** 2025-09-16 00:53:13.133032 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.133038 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.133044 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.133050 | orchestrator | 2025-09-16 00:53:13.133280 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-09-16 00:53:13.133289 | orchestrator | Tuesday 16 September 2025 
00:49:51 +0000 (0:00:01.982) 0:02:45.212 ***** 2025-09-16 00:53:13.133295 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.133306 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.133312 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.133318 | orchestrator | 2025-09-16 00:53:13.133324 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-09-16 00:53:13.133331 | orchestrator | Tuesday 16 September 2025 00:49:51 +0000 (0:00:00.542) 0:02:45.755 ***** 2025-09-16 00:53:13.133337 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.133343 | orchestrator | 2025-09-16 00:53:13.133349 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-09-16 00:53:13.133355 | orchestrator | Tuesday 16 September 2025 00:49:52 +0000 (0:00:01.114) 0:02:46.870 ***** 2025-09-16 00:53:13.133413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-16 00:53:13.133424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-16 00:53:13.133432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.133445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.133455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-16 00:53:13.133500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.133509 | orchestrator | 2025-09-16 00:53:13.133515 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-09-16 00:53:13.133522 | orchestrator | Tuesday 16 September 2025 00:49:56 +0000 (0:00:04.027) 0:02:50.897 ***** 2025-09-16 00:53:13.133528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-16 00:53:13.133543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-16 00:53:13.133550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.133600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.133609 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.133616 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.133622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-16 00:53:13.133629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.133642 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.133648 | orchestrator | 2025-09-16 00:53:13.133654 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-09-16 00:53:13.133661 | orchestrator | Tuesday 16 September 2025 00:49:57 +0000 (0:00:01.081) 0:02:51.979 ***** 2025-09-16 00:53:13.133667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-16 00:53:13.133674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-16 00:53:13.133680 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.133687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-16 00:53:13.133693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-16 00:53:13.133699 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.133769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-16 00:53:13.133777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-16 00:53:13.133783 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.133789 | orchestrator | 2025-09-16 00:53:13.133796 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-09-16 00:53:13.133802 | orchestrator | Tuesday 16 September 2025 00:49:58 +0000 (0:00:00.864) 0:02:52.844 ***** 2025-09-16 00:53:13.133808 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.133815 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.133825 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.133831 | orchestrator | 2025-09-16 00:53:13.133838 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL 
rules config] ************* 2025-09-16 00:53:13.133844 | orchestrator | Tuesday 16 September 2025 00:49:59 +0000 (0:00:01.204) 0:02:54.049 ***** 2025-09-16 00:53:13.133850 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.133857 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.133902 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.133910 | orchestrator | 2025-09-16 00:53:13.133916 | orchestrator | TASK [include_role : manila] *************************************************** 2025-09-16 00:53:13.133922 | orchestrator | Tuesday 16 September 2025 00:50:01 +0000 (0:00:02.020) 0:02:56.069 ***** 2025-09-16 00:53:13.133990 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.134000 | orchestrator | 2025-09-16 00:53:13.134007 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-09-16 00:53:13.134036 | orchestrator | Tuesday 16 September 2025 00:50:03 +0000 (0:00:01.291) 0:02:57.360 ***** 2025-09-16 00:53:13.134046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-16 00:53:13.134059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-16 00:53:13.134066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.134073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 
'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.134080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.134165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.134182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.134205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.134212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-16 00:53:13.134218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.134225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.134299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.134309 | orchestrator | 2025-09-16 00:53:13.134316 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-09-16 00:53:13.134327 | orchestrator | Tuesday 16 September 2025 00:50:06 +0000 (0:00:03.345) 0:03:00.705 ***** 2025-09-16 00:53:13.134334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': 
'8786'}}}})  2025-09-16 00:53:13.134340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.134347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.134353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.134360 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.134370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-16 00:53:13.134679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.134704 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.134711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.134718 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.134725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-16 00:53:13.134731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.134753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-16 
00:53:13.134844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.134859 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.134866 | orchestrator | 2025-09-16 00:53:13.134872 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-09-16 00:53:13.134879 | orchestrator | Tuesday 16 September 2025 00:50:07 +0000 (0:00:00.655) 0:03:01.361 ***** 2025-09-16 00:53:13.134885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-16 00:53:13.134892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-16 00:53:13.134899 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.134906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-16 00:53:13.134912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-16 00:53:13.134918 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.134924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-16 00:53:13.134930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-16 00:53:13.134937 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.134943 | orchestrator | 2025-09-16 00:53:13.134949 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-09-16 00:53:13.134955 | orchestrator | Tuesday 16 September 2025 00:50:08 +0000 (0:00:01.411) 0:03:02.772 ***** 2025-09-16 00:53:13.134961 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.134967 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.134974 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.134980 | orchestrator | 2025-09-16 00:53:13.134986 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-09-16 00:53:13.134992 | orchestrator | Tuesday 16 September 2025 00:50:09 +0000 (0:00:01.205) 0:03:03.977 ***** 2025-09-16 00:53:13.134998 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.135004 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.135010 | 
orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.135017 | orchestrator | 2025-09-16 00:53:13.135023 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-09-16 00:53:13.135029 | orchestrator | Tuesday 16 September 2025 00:50:11 +0000 (0:00:01.946) 0:03:05.923 ***** 2025-09-16 00:53:13.135035 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.135041 | orchestrator | 2025-09-16 00:53:13.135047 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-09-16 00:53:13.135054 | orchestrator | Tuesday 16 September 2025 00:50:13 +0000 (0:00:01.264) 0:03:07.188 ***** 2025-09-16 00:53:13.135060 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-16 00:53:13.135066 | orchestrator | 2025-09-16 00:53:13.135072 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-09-16 00:53:13.135082 | orchestrator | Tuesday 16 September 2025 00:50:15 +0000 (0:00:02.668) 0:03:09.856 ***** 2025-09-16 00:53:13.135135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-16 00:53:13.135146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-16 00:53:13.135152 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.135159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-16 00:53:13.135174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-16 00:53:13.135181 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.135229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-16 00:53:13.135239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-16 00:53:13.135245 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.135252 | orchestrator | 2025-09-16 00:53:13.135258 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-09-16 00:53:13.135264 | orchestrator | Tuesday 16 September 2025 00:50:17 +0000 (0:00:02.045) 0:03:11.902 ***** 2025-09-16 00:53:13.135275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-16 00:53:13.135326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-16 00:53:13.135335 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.135342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-16 00:53:13.135357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-16 00:53:13.135363 | orchestrator | skipping: [testbed-node-1] 
2025-09-16 00:53:13.135412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-16 00:53:13.135422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-16 00:53:13.135429 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.135435 | orchestrator | 2025-09-16 00:53:13.135441 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-09-16 00:53:13.135448 | orchestrator | Tuesday 16 September 2025 00:50:19 +0000 (0:00:02.186) 0:03:14.089 ***** 2025-09-16 00:53:13.135468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-16 00:53:13.135479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-16 00:53:13.135486 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.135493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-16 00:53:13.135502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-16 00:53:13.135509 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.135557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-16 00:53:13.135567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-16 00:53:13.135573 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.135580 | orchestrator | 2025-09-16 00:53:13.135586 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-09-16 00:53:13.135592 | orchestrator | Tuesday 16 September 2025 00:50:22 +0000 (0:00:02.913) 0:03:17.003 ***** 
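As an aside, the mariadb entry dumped above already carries everything a TCP load balancer needs: mode tcp on port 3306, keepalive/timeout options for frontend and backend, and a custom_member_list in which testbed-node-0 is the primary while testbed-node-1 and testbed-node-2 are marked backup (the usual active/passive pattern for a Galera cluster behind HAProxy). A minimal Python sketch, using only the values visible in this log plus a placeholder VIP, shows how such a dict maps onto an HAProxy listen block:

# Illustrative only: render an HAProxy "listen" block from the mariadb
# haproxy entry shown in the log above (values copied from the dump).
mariadb_haproxy = {
    "mode": "tcp",
    "listen_port": "3306",
    "frontend_tcp_extra": ["option clitcpka", "timeout client 3600s"],
    "backend_tcp_extra": ["option srvtcpka", "timeout server 3600s", ""],
    "custom_member_list": [
        " server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5",
        " server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup",
        " server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup",
        "",
    ],
}

def render_listen_block(name: str, svc: dict, vip: str = "192.168.16.254") -> str:
    # The VIP is a placeholder; the real internal VIP is not shown in this excerpt.
    lines = [f"listen {name}", f"    mode {svc['mode']}", f"    bind {vip}:{svc['listen_port']}"]
    lines += [f"    {opt}" for opt in svc["frontend_tcp_extra"] + svc["backend_tcp_extra"] if opt]
    lines += [f"   {member}" for member in svc["custom_member_list"] if member.strip()]
    return "\n".join(lines)

print(render_listen_block("mariadb", mariadb_haproxy))

The rendered block is only an illustration of the data above; the actual template used by the role and the internal VIP of this deployment are not part of this log excerpt.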
2025-09-16 00:53:13.135598 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.135605 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.135611 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.135617 | orchestrator | 2025-09-16 00:53:13.135623 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-09-16 00:53:13.135646 | orchestrator | Tuesday 16 September 2025 00:50:24 +0000 (0:00:01.932) 0:03:18.936 ***** 2025-09-16 00:53:13.135652 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.135658 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.135665 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.135671 | orchestrator | 2025-09-16 00:53:13.135677 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-09-16 00:53:13.135684 | orchestrator | Tuesday 16 September 2025 00:50:26 +0000 (0:00:01.432) 0:03:20.368 ***** 2025-09-16 00:53:13.135690 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.135696 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.135702 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.135709 | orchestrator | 2025-09-16 00:53:13.135715 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-09-16 00:53:13.135721 | orchestrator | Tuesday 16 September 2025 00:50:26 +0000 (0:00:00.321) 0:03:20.690 ***** 2025-09-16 00:53:13.135727 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.135734 | orchestrator | 2025-09-16 00:53:13.135740 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-09-16 00:53:13.135746 | orchestrator | Tuesday 16 September 2025 00:50:27 +0000 (0:00:01.279) 0:03:21.969 ***** 2025-09-16 00:53:13.135754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-16 00:53:13.135810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-16 00:53:13.135873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 
'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-16 00:53:13.135882 | orchestrator | 2025-09-16 00:53:13.135889 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-09-16 00:53:13.135895 | orchestrator | Tuesday 16 September 2025 00:50:29 +0000 (0:00:01.507) 0:03:23.477 ***** 2025-09-16 00:53:13.135902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-16 00:53:13.135914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-16 00:53:13.135920 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.135927 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.135933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-16 00:53:13.135940 | orchestrator | skipping: [testbed-node-2] 2025-09-16 
00:53:13.135946 | orchestrator | 2025-09-16 00:53:13.135952 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-09-16 00:53:13.135958 | orchestrator | Tuesday 16 September 2025 00:50:29 +0000 (0:00:00.359) 0:03:23.837 ***** 2025-09-16 00:53:13.135966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-16 00:53:13.135976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-16 00:53:13.135982 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.135987 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.136027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-16 00:53:13.136035 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.136041 | orchestrator | 2025-09-16 00:53:13.136046 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-09-16 00:53:13.136052 | orchestrator | Tuesday 16 September 2025 00:50:30 +0000 (0:00:00.799) 0:03:24.637 ***** 2025-09-16 00:53:13.136061 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.136067 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.136072 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.136078 | orchestrator | 2025-09-16 00:53:13.136083 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-09-16 00:53:13.136089 | orchestrator | Tuesday 16 September 2025 00:50:30 +0000 (0:00:00.429) 0:03:25.066 ***** 2025-09-16 00:53:13.136094 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.136099 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.136105 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.136110 | orchestrator | 2025-09-16 00:53:13.136115 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-09-16 00:53:13.136121 | orchestrator | Tuesday 16 September 2025 00:50:32 +0000 (0:00:01.212) 0:03:26.278 ***** 2025-09-16 00:53:13.136126 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.136132 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.136137 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.136142 | orchestrator | 2025-09-16 00:53:13.136148 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-09-16 00:53:13.136153 | orchestrator | Tuesday 16 September 2025 00:50:32 +0000 (0:00:00.298) 0:03:26.577 ***** 2025-09-16 00:53:13.136159 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.136164 | orchestrator | 2025-09-16 00:53:13.136169 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-09-16 00:53:13.136175 | 
orchestrator | Tuesday 16 September 2025 00:50:33 +0000 (0:00:01.378) 0:03:27.956 ***** 2025-09-16 00:53:13.136180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-16 00:53:13.136187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.136196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-16 00:53:13.136240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-16 00:53:13.136248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.136254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.136260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.136269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.136327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 
'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.136339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.136348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.136356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-16 00:53:13.136364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.136378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-16 00:53:13.136444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.136457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-16 00:53:13.136466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.136475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-16 00:53:13.136485 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.136493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-16 00:53:13.136517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-16 00:53:13.136572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-16 00:53:13.136581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-16 00:53:13.136591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.136616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-16 00:53:13.136624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.136633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 00:53:13.136701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 00:53:13.136714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.136723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 
'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.136732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.136742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-16 00:53:13.136751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 00:53:13.136786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-16 00:53:13.136864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  
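Reading the per-item results of this neutron task: only neutron-server reports changed on all three nodes, since it is the only entry in the dump whose haproxy section contains enabled listeners (an internal frontend on 9696 and an external one behind api.testbed.osism.xyz); every agent container is iterated but skipped. One plausible way to read that pattern, sketched in Python purely as an illustration of the dumped data and not as kolla-ansible's actual task condition:

# Rough illustration of how the per-item results above can be read; the
# service definitions are trimmed copies of the dicts dumped in this log.
neutron_services = {
    "neutron-server": {
        "enabled": True,
        "haproxy": {
            "neutron_server": {"enabled": True, "mode": "http", "external": False, "port": "9696"},
            "neutron_server_external": {"enabled": True, "mode": "http", "external": True,
                                        "external_fqdn": "api.testbed.osism.xyz", "port": "9696"},
        },
    },
    # Agents such as the ones below carry no enabled haproxy entry in the
    # dump above, so the role reports them as skipped.
    "neutron-openvswitch-agent": {"enabled": False},
    "neutron-ovn-metadata-agent": {"enabled": True},
}

def needs_haproxy_config(svc: dict) -> bool:
    listeners = svc.get("haproxy", {})
    return svc.get("enabled", False) and any(l.get("enabled") for l in listeners.values())

for name, svc in neutron_services.items():
    state = "changed" if needs_haproxy_config(svc) else "skipping"
    print(f"{state}: {name}")

Running the sketch prints changed for neutron-server and skipping for the two agents, mirroring the per-node results recorded above.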
2025-09-16 00:53:13.136874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.136880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-16 00:53:13.136886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.136892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-16 00:53:13.136903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.136912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-16 00:53:13.136959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-16 00:53:13.136968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-16 00:53:13.136974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.136980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': 
'30'}}})  2025-09-16 00:53:13.136995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-16 00:53:13.137036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-16 00:53:13.137044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-16 00:53:13.137050 | orchestrator | 2025-09-16 00:53:13.137056 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-09-16 00:53:13.137062 | orchestrator | Tuesday 16 September 2025 00:50:37 +0000 (0:00:04.012) 0:03:31.968 ***** 2025-09-16 00:53:13.137068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-09-16 00:53:13.137074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.137088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.137138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.137147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-16 00:53:13.137152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.137158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-16 00:53:13.137170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-16 00:53:13.137176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-16 00:53:13.137220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.137228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.137234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.137240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 00:53:13.137249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.137260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.137308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-16 00:53:13.137317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-16 00:53:13.137323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.137329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-16 00:53:13.137339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-16 00:53:13.137346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.137391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-16 00:53:13.137399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-16 00:53:13.137405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.137410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-16 00:53:13.137420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.137429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-16 00:53:13.137474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.137483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 00:53:13.137488 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.137494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.137505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.137510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-16 00:53:13.137519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-16 00:53:13.137561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-16 00:53:13.137569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.137575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.137585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-16 00:53:13.137591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-16 00:53:13.137596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-16 00:53:13.137602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-16 00:53:13.137622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.137629 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.137643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 00:53:13.137652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.137658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-16 00:53:13.137664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-16 00:53:13.137685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.137706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-16 00:53:13.137713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-16 00:53:13.137722 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.137728 | orchestrator | 2025-09-16 00:53:13.137733 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-09-16 00:53:13.137739 | orchestrator | Tuesday 16 September 2025 00:50:39 +0000 (0:00:01.406) 0:03:33.375 ***** 2025-09-16 00:53:13.137745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-16 00:53:13.137751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-16 00:53:13.137815 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.137826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-16 00:53:13.137835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-16 00:53:13.137843 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.137852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-16 00:53:13.137861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-16 00:53:13.137870 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.137879 | orchestrator | 2025-09-16 00:53:13.137888 | 
orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-09-16 00:53:13.137898 | orchestrator | Tuesday 16 September 2025 00:50:41 +0000 (0:00:01.951) 0:03:35.327 ***** 2025-09-16 00:53:13.137907 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.137916 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.137925 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.137934 | orchestrator | 2025-09-16 00:53:13.137942 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-09-16 00:53:13.137951 | orchestrator | Tuesday 16 September 2025 00:50:42 +0000 (0:00:01.331) 0:03:36.658 ***** 2025-09-16 00:53:13.137960 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.137968 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.137973 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.137979 | orchestrator | 2025-09-16 00:53:13.137984 | orchestrator | TASK [include_role : placement] ************************************************ 2025-09-16 00:53:13.137990 | orchestrator | Tuesday 16 September 2025 00:50:44 +0000 (0:00:02.018) 0:03:38.677 ***** 2025-09-16 00:53:13.137995 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.138001 | orchestrator | 2025-09-16 00:53:13.138006 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-09-16 00:53:13.138037 | orchestrator | Tuesday 16 September 2025 00:50:45 +0000 (0:00:01.175) 0:03:39.853 ***** 2025-09-16 00:53:13.138069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-16 00:53:13.138083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-16 00:53:13.138089 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-16 00:53:13.138095 | orchestrator | 2025-09-16 00:53:13.138100 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-09-16 00:53:13.138105 | orchestrator | Tuesday 16 September 2025 00:50:49 +0000 (0:00:03.708) 0:03:43.561 ***** 2025-09-16 00:53:13.138111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-16 00:53:13.138117 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.138140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-16 00:53:13.138153 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.138159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-16 00:53:13.138165 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.138170 | orchestrator | 2025-09-16 00:53:13.138176 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-09-16 00:53:13.138181 | orchestrator | Tuesday 16 September 2025 00:50:49 +0000 (0:00:00.493) 0:03:44.054 ***** 2025-09-16 00:53:13.138187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-16 00:53:13.138193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-16 00:53:13.138200 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.138207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-16 00:53:13.138213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-16 00:53:13.138220 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.138226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-16 00:53:13.138233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-16 00:53:13.138239 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.138246 | orchestrator | 2025-09-16 00:53:13.138252 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-09-16 00:53:13.138258 | orchestrator | Tuesday 16 September 2025 00:50:50 +0000 (0:00:00.768) 0:03:44.823 ***** 2025-09-16 00:53:13.138264 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.138271 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.138277 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.138283 | orchestrator | 2025-09-16 00:53:13.138289 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-09-16 00:53:13.138299 | orchestrator | Tuesday 16 September 2025 00:50:51 +0000 (0:00:01.306) 0:03:46.130 ***** 2025-09-16 00:53:13.138305 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.138311 | 
orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.138318 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.138324 | orchestrator | 2025-09-16 00:53:13.138330 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-09-16 00:53:13.138336 | orchestrator | Tuesday 16 September 2025 00:50:54 +0000 (0:00:02.133) 0:03:48.263 ***** 2025-09-16 00:53:13.138341 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.138346 | orchestrator | 2025-09-16 00:53:13.138355 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-09-16 00:53:13.138360 | orchestrator | Tuesday 16 September 2025 00:50:55 +0000 (0:00:01.499) 0:03:49.762 ***** 2025-09-16 00:53:13.138380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-16 00:53:13.138388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.138394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.138401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-16 00:53:13.138426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-16 00:53:13.138433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.138439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.138445 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.138450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.138470 | orchestrator | 2025-09-16 00:53:13.138476 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-09-16 00:53:13.138481 | orchestrator | Tuesday 16 September 2025 00:50:59 +0000 (0:00:04.240) 0:03:54.003 ***** 2025-09-16 00:53:13.138502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-16 00:53:13.138509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.138515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.138520 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.138526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-16 00:53:13.138536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.138544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.138549 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.138568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-16 00:53:13.138574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.138579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.138587 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.138592 | orchestrator | 2025-09-16 00:53:13.138597 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-09-16 00:53:13.138602 | orchestrator | Tuesday 16 September 2025 00:51:00 +0000 (0:00:01.007) 0:03:55.010 ***** 2025-09-16 00:53:13.138607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-16 00:53:13.138613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-16 00:53:13.138618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-16 00:53:13.138623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-16 
00:53:13.138628 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.138633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-16 00:53:13.138641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-16 00:53:13.138646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-16 00:53:13.138651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-16 00:53:13.138668 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.138673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-16 00:53:13.138678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-16 00:53:13.138683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-16 00:53:13.138688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-16 00:53:13.138693 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.138698 | orchestrator | 2025-09-16 00:53:13.138703 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-09-16 00:53:13.138708 | orchestrator | Tuesday 16 September 2025 00:51:02 +0000 (0:00:01.179) 0:03:56.190 ***** 2025-09-16 00:53:13.138712 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.138717 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.138722 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.138727 | orchestrator | 2025-09-16 00:53:13.138732 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-09-16 00:53:13.138741 | orchestrator | Tuesday 16 September 2025 00:51:03 +0000 (0:00:01.445) 0:03:57.635 ***** 2025-09-16 00:53:13.138746 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.138751 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.138803 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.138808 | orchestrator | 2025-09-16 00:53:13.138813 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-09-16 00:53:13.138818 | orchestrator | Tuesday 16 September 2025 00:51:05 +0000 (0:00:02.057) 0:03:59.693 ***** 2025-09-16 00:53:13.138823 | orchestrator | included: 
nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.138828 | orchestrator | 2025-09-16 00:53:13.138832 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-09-16 00:53:13.138837 | orchestrator | Tuesday 16 September 2025 00:51:07 +0000 (0:00:01.481) 0:04:01.175 ***** 2025-09-16 00:53:13.138842 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-09-16 00:53:13.138847 | orchestrator | 2025-09-16 00:53:13.138852 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-09-16 00:53:13.138857 | orchestrator | Tuesday 16 September 2025 00:51:07 +0000 (0:00:00.846) 0:04:02.022 ***** 2025-09-16 00:53:13.138862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-16 00:53:13.138867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-16 00:53:13.138875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-16 00:53:13.138880 | orchestrator | 2025-09-16 00:53:13.138885 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-09-16 00:53:13.138890 | orchestrator | Tuesday 16 September 2025 00:51:11 +0000 (0:00:03.985) 0:04:06.007 ***** 2025-09-16 00:53:13.138909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-16 00:53:13.138915 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.138920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': 
{'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-16 00:53:13.138929 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.138934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-16 00:53:13.138939 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.138944 | orchestrator | 2025-09-16 00:53:13.138949 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-09-16 00:53:13.138954 | orchestrator | Tuesday 16 September 2025 00:51:13 +0000 (0:00:01.337) 0:04:07.344 ***** 2025-09-16 00:53:13.138958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-16 00:53:13.138964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-16 00:53:13.138970 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.138976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-16 00:53:13.138984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-16 00:53:13.138992 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.139000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-16 00:53:13.139008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-16 00:53:13.139016 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.139024 | orchestrator | 2025-09-16 00:53:13.139030 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-16 00:53:13.139039 | orchestrator | Tuesday 16 September 2025 00:51:14 +0000 (0:00:01.403) 0:04:08.748 ***** 2025-09-16 00:53:13.139046
| orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.139054 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.139063 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.139070 | orchestrator | 2025-09-16 00:53:13.139081 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-16 00:53:13.139090 | orchestrator | Tuesday 16 September 2025 00:51:16 +0000 (0:00:02.389) 0:04:11.138 ***** 2025-09-16 00:53:13.139095 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.139100 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.139105 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.139110 | orchestrator | 2025-09-16 00:53:13.139115 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-09-16 00:53:13.139125 | orchestrator | Tuesday 16 September 2025 00:51:19 +0000 (0:00:02.941) 0:04:14.080 ***** 2025-09-16 00:53:13.139148 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-09-16 00:53:13.139153 | orchestrator | 2025-09-16 00:53:13.139158 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-09-16 00:53:13.139163 | orchestrator | Tuesday 16 September 2025 00:51:21 +0000 (0:00:01.323) 0:04:15.403 ***** 2025-09-16 00:53:13.139168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-16 00:53:13.139173 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.139178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-16 00:53:13.139183 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.139189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-16 00:53:13.139194 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.139198 | orchestrator | 2025-09-16 00:53:13.139203 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external 
frontend] *** 2025-09-16 00:53:13.139208 | orchestrator | Tuesday 16 September 2025 00:51:22 +0000 (0:00:01.215) 0:04:16.619 ***** 2025-09-16 00:53:13.139213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-16 00:53:13.139218 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.139223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-16 00:53:13.139228 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.139235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-16 00:53:13.139244 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.139249 | orchestrator | 2025-09-16 00:53:13.139254 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-09-16 00:53:13.139259 | orchestrator | Tuesday 16 September 2025 00:51:23 +0000 (0:00:01.165) 0:04:17.784 ***** 2025-09-16 00:53:13.139263 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.139268 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.139273 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.139278 | orchestrator | 2025-09-16 00:53:13.139295 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-16 00:53:13.139301 | orchestrator | Tuesday 16 September 2025 00:51:25 +0000 (0:00:01.752) 0:04:19.537 ***** 2025-09-16 00:53:13.139305 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:53:13.139311 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:53:13.139315 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:53:13.139320 | orchestrator | 2025-09-16 00:53:13.139325 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-16 00:53:13.139330 | orchestrator | Tuesday 16 September 2025 00:51:27 +0000 (0:00:02.316) 0:04:21.853 ***** 2025-09-16 00:53:13.139334 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:53:13.139339 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:53:13.139344 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:53:13.139349 | orchestrator | 
2025-09-16 00:53:13.139354 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-09-16 00:53:13.139359 | orchestrator | Tuesday 16 September 2025 00:51:30 +0000 (0:00:02.752) 0:04:24.606 ***** 2025-09-16 00:53:13.139363 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-09-16 00:53:13.139368 | orchestrator | 2025-09-16 00:53:13.139373 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-09-16 00:53:13.139378 | orchestrator | Tuesday 16 September 2025 00:51:31 +0000 (0:00:00.853) 0:04:25.459 ***** 2025-09-16 00:53:13.139383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-16 00:53:13.139388 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.139393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-16 00:53:13.139398 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.139403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-16 00:53:13.139411 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.139416 | orchestrator | 2025-09-16 00:53:13.139421 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-09-16 00:53:13.139426 | orchestrator | Tuesday 16 September 2025 00:51:32 +0000 (0:00:01.265) 0:04:26.725 ***** 2025-09-16 00:53:13.139431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-16 00:53:13.139436 | 
orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.139443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-16 00:53:13.139448 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.139466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-16 00:53:13.139472 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.139477 | orchestrator | 2025-09-16 00:53:13.139481 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-09-16 00:53:13.139486 | orchestrator | Tuesday 16 September 2025 00:51:33 +0000 (0:00:01.251) 0:04:27.976 ***** 2025-09-16 00:53:13.139491 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.139496 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.139501 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.139505 | orchestrator | 2025-09-16 00:53:13.139510 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-16 00:53:13.139515 | orchestrator | Tuesday 16 September 2025 00:51:35 +0000 (0:00:01.513) 0:04:29.490 ***** 2025-09-16 00:53:13.139520 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:53:13.139524 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:53:13.139529 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:53:13.139534 | orchestrator | 2025-09-16 00:53:13.139539 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-16 00:53:13.139544 | orchestrator | Tuesday 16 September 2025 00:51:37 +0000 (0:00:02.402) 0:04:31.892 ***** 2025-09-16 00:53:13.139548 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:53:13.139553 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:53:13.139558 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:53:13.139562 | orchestrator | 2025-09-16 00:53:13.139567 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-09-16 00:53:13.139572 | orchestrator | Tuesday 16 September 2025 00:51:40 +0000 (0:00:03.131) 0:04:35.023 ***** 2025-09-16 00:53:13.139580 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.139585 | orchestrator | 2025-09-16 00:53:13.139590 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-09-16 00:53:13.139594 | orchestrator | Tuesday 16 September 2025 00:51:42 +0000 (0:00:01.538) 0:04:36.561 ***** 2025-09-16 00:53:13.139599 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-16 00:53:13.139605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-16 00:53:13.139613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-16 00:53:13.139631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-16 00:53:13.139637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-16 00:53:13.139646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-16 00:53:13.139651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-16 00:53:13.139656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-16 00:53:13.139663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.139681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-16 00:53:13.139687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 
'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-16 00:53:13.139692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-16 00:53:13.139701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.139706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-16 00:53:13.139711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.139716 | orchestrator | 2025-09-16 00:53:13.139721 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-09-16 00:53:13.139726 | orchestrator | Tuesday 16 September 2025 00:51:45 +0000 (0:00:03.285) 0:04:39.847 ***** 2025-09-16 00:53:13.139745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-16 00:53:13.139751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-16 00:53:13.139774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-16 00:53:13.139779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-16 00:53:13.139784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.139789 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.139797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 
'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-16 00:53:13.139816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-16 00:53:13.139822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-16 00:53:13.139830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-16 00:53:13.139835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.139840 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.139845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-16 00:53:13.139850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-16 00:53:13.139858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-16 00:53:13.139875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-16 00:53:13.139886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-16 00:53:13.139891 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.139896 | orchestrator | 2025-09-16 00:53:13.139901 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-09-16 00:53:13.139905 | orchestrator | Tuesday 16 September 2025 00:51:46 +0000 (0:00:00.686) 0:04:40.533 ***** 2025-09-16 00:53:13.139910 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-16 00:53:13.139915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-16 00:53:13.139921 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.139925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-16 00:53:13.139930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-16 00:53:13.139935 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.139940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-16 00:53:13.139945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-16 00:53:13.139950 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.139955 | orchestrator | 2025-09-16 00:53:13.139960 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-09-16 00:53:13.139964 | orchestrator | Tuesday 16 September 2025 00:51:47 +0000 (0:00:01.263) 0:04:41.796 ***** 2025-09-16 00:53:13.139969 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.139974 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.139979 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.139984 | orchestrator | 2025-09-16 00:53:13.139989 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-09-16 00:53:13.139993 | orchestrator | Tuesday 16 September 2025 00:51:48 +0000 (0:00:01.346) 0:04:43.143 ***** 2025-09-16 00:53:13.139998 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.140003 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.140008 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.140013 | orchestrator | 2025-09-16 00:53:13.140017 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-09-16 00:53:13.140022 | orchestrator | Tuesday 16 September 2025 00:51:50 +0000 (0:00:01.941) 0:04:45.085 ***** 2025-09-16 00:53:13.140027 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.140032 | orchestrator | 2025-09-16 00:53:13.140037 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-09-16 00:53:13.140049 | orchestrator | Tuesday 16 September 2025 00:51:52 +0000 (0:00:01.285) 0:04:46.371 ***** 2025-09-16 00:53:13.140068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-16 00:53:13.140074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-16 00:53:13.140079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-16 00:53:13.140085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-16 00:53:13.140115 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-16 00:53:13.140131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-16 00:53:13.140140 | orchestrator | 2025-09-16 00:53:13.140148 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-09-16 00:53:13.140157 | orchestrator | Tuesday 16 September 2025 00:51:57 +0000 (0:00:05.433) 0:04:51.805 ***** 2025-09-16 00:53:13.140166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-16 00:53:13.140175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-16 00:53:13.140184 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.140201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-16 00:53:13.140230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-16 00:53:13.140237 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.140242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-16 00:53:13.140247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-16 00:53:13.140253 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.140258 | orchestrator | 2025-09-16 00:53:13.140266 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-09-16 00:53:13.140271 | orchestrator | Tuesday 16 September 2025 00:51:58 +0000 (0:00:00.628) 0:04:52.434 ***** 2025-09-16 00:53:13.140276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-16 00:53:13.140281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-16 00:53:13.140289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-16 00:53:13.140294 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.140299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-16 00:53:13.140316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-16 00:53:13.140322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-16 00:53:13.140327 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.140332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-16 00:53:13.140337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-16 00:53:13.140342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-16 00:53:13.140347 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.140351 | orchestrator | 2025-09-16 00:53:13.140356 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-09-16 00:53:13.140361 | orchestrator | Tuesday 16 September 2025 00:51:59 +0000 (0:00:00.889) 0:04:53.324 ***** 2025-09-16 00:53:13.140366 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.140371 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.140376 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.140380 | orchestrator | 2025-09-16 00:53:13.140385 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-09-16 00:53:13.140390 | orchestrator | Tuesday 16 September 2025 00:51:59 +0000 (0:00:00.725) 0:04:54.049 ***** 2025-09-16 00:53:13.140395 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.140399 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.140404 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.140409 | orchestrator | 2025-09-16 00:53:13.140414 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-09-16 00:53:13.140419 | orchestrator | Tuesday 16 September 2025 00:52:01 +0000 (0:00:01.362) 0:04:55.411 ***** 2025-09-16 00:53:13.140424 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.140428 | orchestrator | 2025-09-16 00:53:13.140433 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-09-16 00:53:13.140440 | orchestrator | Tuesday 16 September 2025 00:52:02 +0000 (0:00:01.388) 0:04:56.800 ***** 2025-09-16 00:53:13.140446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-16 00:53:13.140451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-16 00:53:13.140471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-16 00:53:13.140477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:53:13.140482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-16 00:53:13.140487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:53:13.140492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:53:13.140501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-16 00:53:13.140506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:53:13.140514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-16 00:53:13.140532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-16 00:53:13.140538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-16 00:53:13.140543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:53:13.140548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:53:13.140557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-16 00:53:13.140562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-16 00:53:13.140573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-16 00:53:13.140578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-16 00:53:13.140583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:53:13.140594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-16 00:53:13.140599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:53:13.140606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:53:13.140615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-16 00:53:13.140620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:53:13.140625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-16 00:53:13.140630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-16 00:53:13.140639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-16 00:53:13.140644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:53:13.140651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:53:13.140659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-16 00:53:13.140664 | orchestrator | 2025-09-16 00:53:13.140669 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-09-16 00:53:13.140674 | orchestrator | Tuesday 16 September 2025 00:52:06 +0000 (0:00:04.239) 0:05:01.040 ***** 2025-09-16 00:53:13.140679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-16 00:53:13.140687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-16 00:53:13.140692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:53:13.140697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:53:13.140702 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-16 00:53:13.140713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-16 00:53:13.140718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-16 00:53:13.140726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-16 00:53:13.140731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-16 00:53:13.140736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:53:13.140742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:53:13.140749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:53:13.140792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:53:13.140799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-16 00:53:13.140807 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.140813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-16 00:53:13.140818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-16 00:53:13.140823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-16 00:53:13.140831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-16 00:53:13.140839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-16 00:53:13.140844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:53:13.140853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:53:13.140858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:53:13.140863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:53:13.140868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-16 00:53:13.140873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-16 00:53:13.140892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-16 00:53:13.140898 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.140906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-16 00:53:13.140911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:53:13.140917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 00:53:13.140922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-16 00:53:13.140926 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.140931 | orchestrator | 2025-09-16 00:53:13.140936 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-09-16 00:53:13.140941 | orchestrator | Tuesday 16 September 2025 00:52:08 +0000 (0:00:01.175) 0:05:02.215 ***** 2025-09-16 00:53:13.140946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-16 00:53:13.140951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-16 00:53:13.140957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-16 00:53:13.140965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-16 00:53:13.140970 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.140974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-16 00:53:13.140986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-16 00:53:13.140992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-16 00:53:13.140996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-16 00:53:13.141004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-16 00:53:13.141008 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.141013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-16 00:53:13.141018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-16 00:53:13.141023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}})  2025-09-16 00:53:13.141027 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.141032 | orchestrator | 2025-09-16 00:53:13.141036 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-09-16 00:53:13.141041 | orchestrator | Tuesday 16 September 2025 00:52:08 +0000 (0:00:00.947) 0:05:03.163 ***** 2025-09-16 00:53:13.141046 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.141050 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.141055 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.141059 | orchestrator | 2025-09-16 00:53:13.141064 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-09-16 00:53:13.141068 | orchestrator | Tuesday 16 September 2025 00:52:09 +0000 (0:00:00.413) 0:05:03.576 ***** 2025-09-16 00:53:13.141073 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.141078 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.141082 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.141087 | orchestrator | 2025-09-16 00:53:13.141091 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-09-16 00:53:13.141096 | orchestrator | Tuesday 16 September 2025 00:52:10 +0000 (0:00:01.374) 0:05:04.950 ***** 2025-09-16 00:53:13.141100 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.141105 | orchestrator | 2025-09-16 00:53:13.141109 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-09-16 00:53:13.141114 | orchestrator | Tuesday 16 September 2025 00:52:12 +0000 (0:00:01.668) 0:05:06.619 ***** 2025-09-16 00:53:13.141122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-16 00:53:13.141134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-16 00:53:13.141139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-16 00:53:13.141145 | orchestrator | 2025-09-16 00:53:13.141149 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-09-16 00:53:13.141154 | orchestrator | Tuesday 16 September 2025 00:52:14 +0000 (0:00:02.184) 0:05:08.803 ***** 2025-09-16 00:53:13.141159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-16 00:53:13.141169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 
'rabbitmq'}}}})  2025-09-16 00:53:13.141174 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.141179 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.141186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-16 00:53:13.141191 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.141196 | orchestrator | 2025-09-16 00:53:13.141201 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-09-16 00:53:13.141205 | orchestrator | Tuesday 16 September 2025 00:52:14 +0000 (0:00:00.356) 0:05:09.159 ***** 2025-09-16 00:53:13.141210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-16 00:53:13.141214 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.141219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-16 00:53:13.141223 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.141228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-16 00:53:13.141233 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.141237 | orchestrator | 2025-09-16 00:53:13.141242 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-09-16 00:53:13.141246 | orchestrator | Tuesday 16 September 2025 00:52:15 +0000 (0:00:00.917) 0:05:10.077 ***** 2025-09-16 00:53:13.141251 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.141255 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.141260 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.141264 | orchestrator | 2025-09-16 00:53:13.141269 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-09-16 00:53:13.141274 | orchestrator | Tuesday 16 September 2025 00:52:16 +0000 (0:00:00.406) 0:05:10.483 ***** 2025-09-16 00:53:13.141278 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.141283 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.141291 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.141295 | orchestrator | 2025-09-16 00:53:13.141300 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-09-16 00:53:13.141304 | orchestrator | 
Tuesday 16 September 2025 00:52:17 +0000 (0:00:01.119) 0:05:11.603 ***** 2025-09-16 00:53:13.141309 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:53:13.141313 | orchestrator | 2025-09-16 00:53:13.141318 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-09-16 00:53:13.141322 | orchestrator | Tuesday 16 September 2025 00:52:18 +0000 (0:00:01.547) 0:05:13.151 ***** 2025-09-16 00:53:13.141327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-16 00:53:13.141337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-16 00:53:13.141342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-16 00:53:13.141348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-16 00:53:13.141356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-16 00:53:13.141380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-16 00:53:13.141385 | orchestrator | 2025-09-16 00:53:13.141392 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-09-16 00:53:13.141397 | orchestrator | Tuesday 16 September 2025 00:52:24 +0000 (0:00:05.464) 0:05:18.615 ***** 2025-09-16 00:53:13.141402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-16 00:53:13.141410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-16 00:53:13.141431 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.141437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-16 00:53:13.141444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-16 00:53:13.141449 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.141457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-16 00:53:13.141462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-16 00:53:13.141470 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.141475 | orchestrator | 2025-09-16 00:53:13.141479 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-09-16 00:53:13.141484 | orchestrator | Tuesday 16 September 2025 00:52:25 +0000 (0:00:00.574) 0:05:19.190 ***** 2025-09-16 00:53:13.141489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-16 00:53:13.141493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-16 00:53:13.141498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-16 00:53:13.141503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-16 00:53:13.141508 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.141512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-16 00:53:13.141517 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-16 00:53:13.141521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-16 00:53:13.141529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-16 00:53:13.141533 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.141538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-16 00:53:13.141545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-16 00:53:13.141549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-16 00:53:13.141554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-16 00:53:13.141559 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.141563 | orchestrator | 2025-09-16 00:53:13.141568 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-09-16 00:53:13.141573 | orchestrator | Tuesday 16 September 2025 00:52:26 +0000 (0:00:01.236) 0:05:20.426 ***** 2025-09-16 00:53:13.141577 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.141585 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.141590 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.141595 | orchestrator | 2025-09-16 00:53:13.141599 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-09-16 00:53:13.141604 | orchestrator | Tuesday 16 September 2025 00:52:27 +0000 (0:00:01.259) 0:05:21.686 ***** 2025-09-16 00:53:13.141608 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.141613 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.141617 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.141622 | orchestrator | 2025-09-16 00:53:13.141626 | orchestrator | TASK [include_role : swift] **************************************************** 2025-09-16 00:53:13.141631 | orchestrator | Tuesday 16 September 2025 00:52:29 +0000 (0:00:01.869) 0:05:23.555 ***** 2025-09-16 00:53:13.141635 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.141640 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.141644 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.141649 | orchestrator | 2025-09-16 00:53:13.141654 | orchestrator | TASK [include_role : tacker] 
*************************************************** 2025-09-16 00:53:13.141658 | orchestrator | Tuesday 16 September 2025 00:52:29 +0000 (0:00:00.287) 0:05:23.842 ***** 2025-09-16 00:53:13.141663 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.141667 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.141672 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.141676 | orchestrator | 2025-09-16 00:53:13.141681 | orchestrator | TASK [include_role : trove] **************************************************** 2025-09-16 00:53:13.141685 | orchestrator | Tuesday 16 September 2025 00:52:29 +0000 (0:00:00.264) 0:05:24.107 ***** 2025-09-16 00:53:13.141690 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.141694 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.141699 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.141703 | orchestrator | 2025-09-16 00:53:13.141708 | orchestrator | TASK [include_role : venus] **************************************************** 2025-09-16 00:53:13.141712 | orchestrator | Tuesday 16 September 2025 00:52:30 +0000 (0:00:00.513) 0:05:24.621 ***** 2025-09-16 00:53:13.141717 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.141721 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.141726 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.141730 | orchestrator | 2025-09-16 00:53:13.141735 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-09-16 00:53:13.141739 | orchestrator | Tuesday 16 September 2025 00:52:30 +0000 (0:00:00.293) 0:05:24.914 ***** 2025-09-16 00:53:13.141744 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.141748 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.141753 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.141775 | orchestrator | 2025-09-16 00:53:13.141784 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-09-16 00:53:13.141791 | orchestrator | Tuesday 16 September 2025 00:52:31 +0000 (0:00:00.277) 0:05:25.192 ***** 2025-09-16 00:53:13.141799 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.141806 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.141812 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.141821 | orchestrator | 2025-09-16 00:53:13.141826 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-09-16 00:53:13.141831 | orchestrator | Tuesday 16 September 2025 00:52:31 +0000 (0:00:00.683) 0:05:25.876 ***** 2025-09-16 00:53:13.141835 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:53:13.141840 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:53:13.141844 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:53:13.141849 | orchestrator | 2025-09-16 00:53:13.141854 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-09-16 00:53:13.141858 | orchestrator | Tuesday 16 September 2025 00:52:32 +0000 (0:00:00.625) 0:05:26.502 ***** 2025-09-16 00:53:13.141863 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:53:13.141867 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:53:13.141876 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:53:13.141880 | orchestrator | 2025-09-16 00:53:13.141885 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-09-16 00:53:13.141889 | orchestrator | 
Tuesday 16 September 2025 00:52:32 +0000 (0:00:00.319) 0:05:26.822 ***** 2025-09-16 00:53:13.141894 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:53:13.141898 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:53:13.141906 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:53:13.141910 | orchestrator | 2025-09-16 00:53:13.141915 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-09-16 00:53:13.141919 | orchestrator | Tuesday 16 September 2025 00:52:33 +0000 (0:00:01.029) 0:05:27.851 ***** 2025-09-16 00:53:13.141924 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:53:13.141929 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:53:13.141933 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:53:13.141938 | orchestrator | 2025-09-16 00:53:13.141942 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-09-16 00:53:13.141947 | orchestrator | Tuesday 16 September 2025 00:52:34 +0000 (0:00:01.068) 0:05:28.919 ***** 2025-09-16 00:53:13.141951 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:53:13.141956 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:53:13.141963 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:53:13.141968 | orchestrator | 2025-09-16 00:53:13.141973 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-09-16 00:53:13.141977 | orchestrator | Tuesday 16 September 2025 00:52:35 +0000 (0:00:00.886) 0:05:29.805 ***** 2025-09-16 00:53:13.141982 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.141986 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.141991 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.141995 | orchestrator | 2025-09-16 00:53:13.142000 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-09-16 00:53:13.142005 | orchestrator | Tuesday 16 September 2025 00:52:40 +0000 (0:00:04.418) 0:05:34.224 ***** 2025-09-16 00:53:13.142009 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:53:13.142035 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:53:13.142040 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:53:13.142044 | orchestrator | 2025-09-16 00:53:13.142049 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-09-16 00:53:13.142053 | orchestrator | Tuesday 16 September 2025 00:52:42 +0000 (0:00:02.780) 0:05:37.005 ***** 2025-09-16 00:53:13.142058 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.142062 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:53:13.142067 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.142071 | orchestrator | 2025-09-16 00:53:13.142076 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-09-16 00:53:13.142081 | orchestrator | Tuesday 16 September 2025 00:52:56 +0000 (0:00:13.350) 0:05:50.356 ***** 2025-09-16 00:53:13.142085 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:53:13.142090 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:53:13.142094 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:53:13.142099 | orchestrator | 2025-09-16 00:53:13.142103 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-09-16 00:53:13.142108 | orchestrator | Tuesday 16 September 2025 00:52:57 +0000 (0:00:01.084) 0:05:51.441 ***** 2025-09-16 00:53:13.142112 | orchestrator | changed: [testbed-node-0] 2025-09-16 
00:53:13.142117 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:53:13.142121 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:53:13.142126 | orchestrator | 2025-09-16 00:53:13.142130 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-09-16 00:53:13.142135 | orchestrator | Tuesday 16 September 2025 00:53:06 +0000 (0:00:09.234) 0:06:00.675 ***** 2025-09-16 00:53:13.142139 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.142144 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.142148 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.142153 | orchestrator | 2025-09-16 00:53:13.142157 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-09-16 00:53:13.142167 | orchestrator | Tuesday 16 September 2025 00:53:06 +0000 (0:00:00.349) 0:06:01.025 ***** 2025-09-16 00:53:13.142171 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.142176 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.142180 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.142185 | orchestrator | 2025-09-16 00:53:13.142189 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-09-16 00:53:13.142194 | orchestrator | Tuesday 16 September 2025 00:53:07 +0000 (0:00:00.341) 0:06:01.367 ***** 2025-09-16 00:53:13.142199 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.142203 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.142208 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.142212 | orchestrator | 2025-09-16 00:53:13.142217 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-09-16 00:53:13.142221 | orchestrator | Tuesday 16 September 2025 00:53:07 +0000 (0:00:00.674) 0:06:02.041 ***** 2025-09-16 00:53:13.142225 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.142230 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.142234 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.142239 | orchestrator | 2025-09-16 00:53:13.142244 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-09-16 00:53:13.142248 | orchestrator | Tuesday 16 September 2025 00:53:08 +0000 (0:00:00.363) 0:06:02.405 ***** 2025-09-16 00:53:13.142253 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.142257 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.142262 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.142266 | orchestrator | 2025-09-16 00:53:13.142271 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-09-16 00:53:13.142275 | orchestrator | Tuesday 16 September 2025 00:53:08 +0000 (0:00:00.329) 0:06:02.734 ***** 2025-09-16 00:53:13.142280 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:53:13.142284 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:53:13.142289 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:53:13.142293 | orchestrator | 2025-09-16 00:53:13.142298 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-09-16 00:53:13.142302 | orchestrator | Tuesday 16 September 2025 00:53:08 +0000 (0:00:00.335) 0:06:03.070 ***** 2025-09-16 00:53:13.142307 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:53:13.142311 | orchestrator | ok: [testbed-node-1] 2025-09-16 
00:53:13.142316 | orchestrator | ok: [testbed-node-2]
2025-09-16 00:53:13.142320 | orchestrator |
2025-09-16 00:53:13.142325 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-09-16 00:53:13.142330 | orchestrator | Tuesday 16 September 2025 00:53:10 +0000 (0:00:01.245) 0:06:04.315 *****
2025-09-16 00:53:13.142334 | orchestrator | ok: [testbed-node-0]
2025-09-16 00:53:13.142339 | orchestrator | ok: [testbed-node-1]
2025-09-16 00:53:13.142343 | orchestrator | ok: [testbed-node-2]
2025-09-16 00:53:13.142348 | orchestrator |
2025-09-16 00:53:13.142355 | orchestrator | PLAY RECAP *********************************************************************
2025-09-16 00:53:13.142360 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-16 00:53:13.142365 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-16 00:53:13.142369 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-16 00:53:13.142374 | orchestrator |
2025-09-16 00:53:13.142379 | orchestrator |
2025-09-16 00:53:13.142386 | orchestrator | TASKS RECAP ********************************************************************
2025-09-16 00:53:13.142391 | orchestrator | Tuesday 16 September 2025 00:53:10 +0000 (0:00:00.803) 0:06:05.118 *****
2025-09-16 00:53:13.142395 | orchestrator | ===============================================================================
2025-09-16 00:53:13.142403 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.35s
2025-09-16 00:53:13.142408 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.23s
2025-09-16 00:53:13.142412 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.46s
2025-09-16 00:53:13.142417 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.43s
2025-09-16 00:53:13.142421 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.04s
2025-09-16 00:53:13.142426 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.73s
2025-09-16 00:53:13.142430 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.42s
2025-09-16 00:53:13.142435 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.24s
2025-09-16 00:53:13.142439 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.24s
2025-09-16 00:53:13.142444 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.06s
2025-09-16 00:53:13.142448 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.03s
2025-09-16 00:53:13.142453 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.01s
2025-09-16 00:53:13.142457 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.99s
2025-09-16 00:53:13.142462 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.99s
2025-09-16 00:53:13.142466 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.90s
2025-09-16 00:53:13.142471 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 3.73s
2025-09-16 00:53:13.142475 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.71s
2025-09-16 00:53:13.142480 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.71s
2025-09-16 00:53:13.142484 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.50s
2025-09-16 00:53:13.142489 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.43s
2025-09-16 00:53:13.142493 | orchestrator | 2025-09-16 00:53:13 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED
2025-09-16 00:53:13.142498 | orchestrator | 2025-09-16 00:53:13 | INFO  | Task 4a864d5d-dde9-47a4-8097-94ba981fcf49 is in state STARTED
2025-09-16 00:53:13.142503 | orchestrator | 2025-09-16 00:53:13 | INFO  | Wait 1 second(s) until the next check
[... identical status checks for tasks e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df, 6d2610e5-0cda-4192-8f43-3a389dfb1176 and 4a864d5d-dde9-47a4-8097-94ba981fcf49 repeat roughly every 3 seconds; all three remain in state STARTED until 00:54:53 ...]
2025-09-16 00:54:53.735739 | orchestrator | 2025-09-16 00:54:53 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED
2025-09-16 00:54:53.737972 | orchestrator | 2025-09-16 00:54:53 | INFO  | Task 
6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED 2025-09-16 00:54:53.740873 | orchestrator | 2025-09-16 00:54:53 | INFO  | Task 4a864d5d-dde9-47a4-8097-94ba981fcf49 is in state STARTED 2025-09-16 00:54:53.740906 | orchestrator | 2025-09-16 00:54:53 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:54:56.784983 | orchestrator | 2025-09-16 00:54:56 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:54:56.788432 | orchestrator | 2025-09-16 00:54:56 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED 2025-09-16 00:54:56.790117 | orchestrator | 2025-09-16 00:54:56 | INFO  | Task 4a864d5d-dde9-47a4-8097-94ba981fcf49 is in state STARTED 2025-09-16 00:54:56.790148 | orchestrator | 2025-09-16 00:54:56 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:54:59.836446 | orchestrator | 2025-09-16 00:54:59 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:54:59.838661 | orchestrator | 2025-09-16 00:54:59 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED 2025-09-16 00:54:59.839963 | orchestrator | 2025-09-16 00:54:59 | INFO  | Task 4a864d5d-dde9-47a4-8097-94ba981fcf49 is in state STARTED 2025-09-16 00:54:59.840240 | orchestrator | 2025-09-16 00:54:59 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:55:02.883417 | orchestrator | 2025-09-16 00:55:02 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:55:02.885171 | orchestrator | 2025-09-16 00:55:02 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED 2025-09-16 00:55:02.887303 | orchestrator | 2025-09-16 00:55:02 | INFO  | Task 4a864d5d-dde9-47a4-8097-94ba981fcf49 is in state STARTED 2025-09-16 00:55:02.887360 | orchestrator | 2025-09-16 00:55:02 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:55:05.934983 | orchestrator | 2025-09-16 00:55:05 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state STARTED 2025-09-16 00:55:05.936351 | orchestrator | 2025-09-16 00:55:05 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED 2025-09-16 00:55:05.938767 | orchestrator | 2025-09-16 00:55:05 | INFO  | Task 4a864d5d-dde9-47a4-8097-94ba981fcf49 is in state STARTED 2025-09-16 00:55:05.939074 | orchestrator | 2025-09-16 00:55:05 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:55:08.991552 | orchestrator | 2025-09-16 00:55:08 | INFO  | Task e50ce0c6-3dd4-4423-a5e0-1614f4bfe1df is in state SUCCESS 2025-09-16 00:55:08.992720 | orchestrator | 2025-09-16 00:55:08.992816 | orchestrator | 2025-09-16 00:55:08.992832 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-09-16 00:55:08.992844 | orchestrator | 2025-09-16 00:55:08.992856 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-16 00:55:08.992867 | orchestrator | Tuesday 16 September 2025 00:44:31 +0000 (0:00:00.865) 0:00:00.865 ***** 2025-09-16 00:55:08.992879 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:08.992915 | orchestrator | 2025-09-16 00:55:08.992927 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-16 00:55:08.992988 | orchestrator | Tuesday 16 September 2025 00:44:32 +0000 (0:00:01.231) 0:00:02.096 ***** 2025-09-16 00:55:08.993000 | orchestrator | ok: 
[testbed-node-4] 2025-09-16 00:55:08.993012 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:08.993023 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:08.993033 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:08.993044 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:08.993055 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:08.993066 | orchestrator | 2025-09-16 00:55:08.993088 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-16 00:55:08.993100 | orchestrator | Tuesday 16 September 2025 00:44:34 +0000 (0:00:01.949) 0:00:04.045 ***** 2025-09-16 00:55:08.993111 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:08.993121 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:08.993132 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:08.993143 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:08.993154 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:08.993164 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:08.993175 | orchestrator | 2025-09-16 00:55:08.993186 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-16 00:55:08.993197 | orchestrator | Tuesday 16 September 2025 00:44:35 +0000 (0:00:00.689) 0:00:04.735 ***** 2025-09-16 00:55:08.993291 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:08.993304 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:08.993317 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:08.993329 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:08.993342 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:08.993381 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:08.993393 | orchestrator | 2025-09-16 00:55:08.993406 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-16 00:55:08.993419 | orchestrator | Tuesday 16 September 2025 00:44:36 +0000 (0:00:00.934) 0:00:05.669 ***** 2025-09-16 00:55:08.993432 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:08.993444 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:08.993457 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:08.993469 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:08.993481 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:08.993494 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:08.993506 | orchestrator | 2025-09-16 00:55:08.993518 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-16 00:55:08.993532 | orchestrator | Tuesday 16 September 2025 00:44:36 +0000 (0:00:00.714) 0:00:06.384 ***** 2025-09-16 00:55:08.993544 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:08.993556 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:08.993569 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:08.993581 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:08.993593 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:08.993605 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:08.993617 | orchestrator | 2025-09-16 00:55:08.993629 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-16 00:55:08.993642 | orchestrator | Tuesday 16 September 2025 00:44:37 +0000 (0:00:00.594) 0:00:06.979 ***** 2025-09-16 00:55:08.993655 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:08.993667 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:08.993678 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:08.993688 
| orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:08.993699 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:08.993709 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:08.993720 | orchestrator | 2025-09-16 00:55:08.993731 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-16 00:55:08.993742 | orchestrator | Tuesday 16 September 2025 00:44:38 +0000 (0:00:01.123) 0:00:08.102 ***** 2025-09-16 00:55:08.993752 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:08.993764 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:08.993798 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:08.993810 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:08.993821 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:08.993832 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:08.993842 | orchestrator | 2025-09-16 00:55:08.993853 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-16 00:55:08.993864 | orchestrator | Tuesday 16 September 2025 00:44:39 +0000 (0:00:00.786) 0:00:08.889 ***** 2025-09-16 00:55:08.993875 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:08.993914 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:08.993927 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:08.993938 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:08.993949 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:08.993959 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:08.993970 | orchestrator | 2025-09-16 00:55:08.993981 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-16 00:55:08.993992 | orchestrator | Tuesday 16 September 2025 00:44:40 +0000 (0:00:00.723) 0:00:09.613 ***** 2025-09-16 00:55:08.994003 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-16 00:55:08.994058 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-16 00:55:08.994073 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-16 00:55:08.994084 | orchestrator | 2025-09-16 00:55:08.994095 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-16 00:55:08.994106 | orchestrator | Tuesday 16 September 2025 00:44:40 +0000 (0:00:00.727) 0:00:10.340 ***** 2025-09-16 00:55:08.994116 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:08.994159 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:08.994243 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:08.994255 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:08.994275 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:08.994287 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:08.994298 | orchestrator | 2025-09-16 00:55:08.994323 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-16 00:55:08.994335 | orchestrator | Tuesday 16 September 2025 00:44:41 +0000 (0:00:00.705) 0:00:11.046 ***** 2025-09-16 00:55:08.994346 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-16 00:55:08.994357 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-16 00:55:08.994368 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 
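
The "Find a running mon container" results above come from probing each monitor host for an existing ceph-mon container; the command issued is visible further down in this log as docker ps -q --filter name=ceph-mon-<hostname>. A minimal Python sketch of that probe follows — treating the first host that returns a non-empty container id as the running monitor is an illustrative assumption, not code taken from ceph-ansible (which delegates the command to each mon host via Ansible):

    import subprocess

    mon_hosts = ["testbed-node-0", "testbed-node-1", "testbed-node-2"]
    running_mon = None
    for host in mon_hosts:
        # Same filter expression as in the log: docker ps -q --filter name=ceph-mon-<hostname>
        result = subprocess.run(
            ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{host}"],
            capture_output=True, text=True, check=False,
        )
        if result.stdout.strip():
            running_mon = host
            break
    # In this run the stdout is empty on every host (first deployment),
    # so running_mon stays unset and the follow-up set_fact tasks are skipped.
    print(f"running_mon: {running_mon}")
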
2025-09-16 00:55:08.994378 | orchestrator | 2025-09-16 00:55:08.994389 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-16 00:55:08.994400 | orchestrator | Tuesday 16 September 2025 00:44:45 +0000 (0:00:03.709) 0:00:14.755 ***** 2025-09-16 00:55:08.994411 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-16 00:55:08.994422 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-16 00:55:08.994433 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-16 00:55:08.994450 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:08.994462 | orchestrator | 2025-09-16 00:55:08.994473 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-16 00:55:08.994484 | orchestrator | Tuesday 16 September 2025 00:44:46 +0000 (0:00:00.795) 0:00:15.551 ***** 2025-09-16 00:55:08.994496 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.994509 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.994529 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.994541 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:08.994552 | orchestrator | 2025-09-16 00:55:08.994563 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-16 00:55:08.994574 | orchestrator | Tuesday 16 September 2025 00:44:47 +0000 (0:00:00.931) 0:00:16.482 ***** 2025-09-16 00:55:08.994586 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.994599 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.994611 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.994622 | orchestrator | skipping: 
[testbed-node-3] 2025-09-16 00:55:08.994633 | orchestrator | 2025-09-16 00:55:08.994644 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-16 00:55:08.994655 | orchestrator | Tuesday 16 September 2025 00:44:47 +0000 (0:00:00.281) 0:00:16.764 ***** 2025-09-16 00:55:08.994674 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-16 00:44:42.690918', 'end': '2025-09-16 00:44:43.211602', 'delta': '0:00:00.520684', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.994688 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-16 00:44:43.827997', 'end': '2025-09-16 00:44:44.079220', 'delta': '0:00:00.251223', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.994705 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-16 00:44:44.640375', 'end': '2025-09-16 00:44:44.929623', 'delta': '0:00:00.289248', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.994723 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:08.994734 | orchestrator | 2025-09-16 00:55:08.994745 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-16 00:55:08.994756 | orchestrator | Tuesday 16 September 2025 00:44:48 +0000 (0:00:00.761) 0:00:17.525 ***** 2025-09-16 00:55:08.994767 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:08.994778 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:08.994817 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:08.994828 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:08.994839 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:08.994850 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:08.994860 | orchestrator | 2025-09-16 00:55:08.994871 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-16 
00:55:08.994882 | orchestrator | Tuesday 16 September 2025 00:44:50 +0000 (0:00:02.236) 0:00:19.762 ***** 2025-09-16 00:55:08.994893 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-16 00:55:08.994987 | orchestrator | 2025-09-16 00:55:08.994998 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-16 00:55:08.995009 | orchestrator | Tuesday 16 September 2025 00:44:51 +0000 (0:00:01.033) 0:00:20.795 ***** 2025-09-16 00:55:08.995020 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:08.995031 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:08.995042 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:08.995053 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:08.995063 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:08.995074 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:08.995085 | orchestrator | 2025-09-16 00:55:08.995096 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-16 00:55:08.995107 | orchestrator | Tuesday 16 September 2025 00:44:54 +0000 (0:00:02.745) 0:00:23.541 ***** 2025-09-16 00:55:08.995118 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:08.995128 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:08.995139 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:08.995150 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:08.995160 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:08.995171 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:08.995181 | orchestrator | 2025-09-16 00:55:08.995192 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-16 00:55:08.995203 | orchestrator | Tuesday 16 September 2025 00:44:56 +0000 (0:00:01.926) 0:00:25.467 ***** 2025-09-16 00:55:08.995214 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:08.995224 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:08.995235 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:08.995246 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:08.995257 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:08.995327 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:08.995340 | orchestrator | 2025-09-16 00:55:08.995351 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-16 00:55:08.995362 | orchestrator | Tuesday 16 September 2025 00:44:56 +0000 (0:00:00.864) 0:00:26.332 ***** 2025-09-16 00:55:08.995373 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:08.995383 | orchestrator | 2025-09-16 00:55:08.995394 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-16 00:55:08.995405 | orchestrator | Tuesday 16 September 2025 00:44:57 +0000 (0:00:00.122) 0:00:26.454 ***** 2025-09-16 00:55:08.995416 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:08.995434 | orchestrator | 2025-09-16 00:55:08.995445 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-16 00:55:08.995455 | orchestrator | Tuesday 16 September 2025 00:44:57 +0000 (0:00:00.180) 0:00:26.635 ***** 2025-09-16 00:55:08.995466 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:08.995477 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:08.995487 | orchestrator | skipping: [testbed-node-5] 
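In the fsid block above, "Get current fsid if cluster is already running" returned ok against testbed-node-0, while every task that would set or generate a new fsid ("Set_fact current_fsid rc 1", "Get current fsid", "Set_fact fsid", "Set_fact fsid from current_fsid", "Generate cluster fsid") is skipped; that pattern indicates the fsid is already supplied by the deployment configuration, so none of the fallbacks fire. The role's fall-through is essentially "keep a configured fsid, otherwise adopt the one a running monitor reports, otherwise generate one". A minimal sketch of that ordering with assumed variable names, not ceph-ansible's literal tasks:

# Illustrative fall-through: current_fsid is assumed to hold the registered
# result of the "get current fsid" check logged above.
- name: Set_fact fsid from current_fsid (sketch)
  ansible.builtin.set_fact:
    fsid: "{{ current_fsid.stdout | trim }}"
  # only when no fsid was supplied via configuration and a running cluster answered
  when: fsid is not defined and current_fsid.rc == 0

- name: Generate cluster fsid (sketch)
  ansible.builtin.set_fact:
    fsid: "{{ 99999999 | random | to_uuid }}"
  # only on a completely fresh deployment with neither a configured nor a running cluster fsid
  when: fsid is not defined and current_fsid.rc != 0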
2025-09-16 00:55:08.995498 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:08.995509 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:08.995520 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:08.995530 | orchestrator | 2025-09-16 00:55:08.995548 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-16 00:55:08.995559 | orchestrator | Tuesday 16 September 2025 00:44:58 +0000 (0:00:01.085) 0:00:27.720 ***** 2025-09-16 00:55:08.995570 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:08.995581 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:08.995592 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:08.995602 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:08.995613 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:08.995624 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:08.995634 | orchestrator | 2025-09-16 00:55:08.995645 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-16 00:55:08.995686 | orchestrator | Tuesday 16 September 2025 00:44:59 +0000 (0:00:01.366) 0:00:29.087 ***** 2025-09-16 00:55:08.995698 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:08.995708 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:08.995719 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:08.995730 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:08.995741 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:08.995751 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:08.995763 | orchestrator | 2025-09-16 00:55:08.995859 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-16 00:55:08.995872 | orchestrator | Tuesday 16 September 2025 00:45:00 +0000 (0:00:00.983) 0:00:30.071 ***** 2025-09-16 00:55:08.995884 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:08.995894 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:08.995905 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:08.995916 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:08.995927 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:08.995938 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:08.995948 | orchestrator | 2025-09-16 00:55:08.996029 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-16 00:55:08.996040 | orchestrator | Tuesday 16 September 2025 00:45:01 +0000 (0:00:01.110) 0:00:31.182 ***** 2025-09-16 00:55:08.996051 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:08.996062 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:08.996072 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:08.996081 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:08.996091 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:08.996100 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:08.996110 | orchestrator | 2025-09-16 00:55:08.996119 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-16 00:55:08.996129 | orchestrator | Tuesday 16 September 2025 00:45:02 +0000 (0:00:00.841) 0:00:32.024 ***** 2025-09-16 00:55:08.996139 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:08.996148 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:08.996158 | orchestrator | skipping: 
[testbed-node-5] 2025-09-16 00:55:08.996188 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:08.996200 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:08.996210 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:08.996220 | orchestrator | 2025-09-16 00:55:08.996278 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-16 00:55:08.996289 | orchestrator | Tuesday 16 September 2025 00:45:03 +0000 (0:00:00.818) 0:00:32.842 ***** 2025-09-16 00:55:08.996306 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:08.996316 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:08.996325 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:08.996335 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:08.996344 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:08.996354 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:08.996363 | orchestrator | 2025-09-16 00:55:08.996373 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-16 00:55:08.996383 | orchestrator | Tuesday 16 September 2025 00:45:04 +0000 (0:00:01.375) 0:00:34.218 ***** 2025-09-16 00:55:08.996394 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8832b43a--4370--5f7f--b8ca--e1ef860202d6-osd--block--8832b43a--4370--5f7f--b8ca--e1ef860202d6', 'dm-uuid-LVM-YcgXbQSLW6T92S6r08xR6FKW11TasuSzM1boHuyKTVrqUfc58vek5nzVrYSc131l'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.996405 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b409e677--b998--57d2--be40--43b65c9fb72d-osd--block--b409e677--b998--57d2--be40--43b65c9fb72d', 'dm-uuid-LVM-gbuhjycd69TX34wcCVoOmjlpPQ8wKcDDF2HTNd5TzUHszt21u4Oo8BSW9v29cmec'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.996423 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.996434 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.996449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.996459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a154e298--15cb--5d50--9a1c--17bc1371db7e-osd--block--a154e298--15cb--5d50--9a1c--17bc1371db7e', 'dm-uuid-LVM-EFbsuN8afaIRpM6v16JYlvMAjTlWagjCgfIoPoiTnRbMaFJFK1uNEn8SJIoQO836'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.996470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--56010334--63d7--5603--a2fe--432c47d6dcb8-osd--block--56010334--63d7--5603--a2fe--432c47d6dcb8', 'dm-uuid-LVM-ucs6vcOg2JldR43Dv3HJMWOQXgxk4Rjo7I7oTzMQaB9pBSV82lBHT1wVuGjA34S1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.996485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.996496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.996506 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.996516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.996531 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.996541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.996555 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.996566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.996581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.996591 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.996611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf', 'scsi-SQEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part1', 'scsi-SQEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part14', 'scsi-SQEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part15', 'scsi-SQEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part16', 'scsi-SQEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:55:08.996659 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.996680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8832b43a--4370--5f7f--b8ca--e1ef860202d6-osd--block--8832b43a--4370--5f7f--b8ca--e1ef860202d6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WHoXZD-QS5F-ff1Z-a1ef-ziTD-c1GW-4CB7Fq', 'scsi-0QEMU_QEMU_HARDDISK_216f9756-46fe-48b3-8a57-6cc5b7e0c275', 'scsi-SQEMU_QEMU_HARDDISK_216f9756-46fe-48b3-8a57-6cc5b7e0c275'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:55:08.996756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.996818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b409e677--b998--57d2--be40--43b65c9fb72d-osd--block--b409e677--b998--57d2--be40--43b65c9fb72d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3K9cLz-vSxn-jDgK-Vo6h-4rad-CN2R-XamGFe', 'scsi-0QEMU_QEMU_HARDDISK_ebe7fd99-ddf0-4119-8dea-cb8b427f2aed', 'scsi-SQEMU_QEMU_HARDDISK_ebe7fd99-ddf0-4119-8dea-cb8b427f2aed'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:55:08.996830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.996841 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b7c66eb-e150-40bb-863f-cd4924cbb0ab', 'scsi-SQEMU_QEMU_HARDDISK_6b7c66eb-e150-40bb-863f-cd4924cbb0ab'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:55:08.996858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-16-00-02-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:55:08.996874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370', 'scsi-SQEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part1', 'scsi-SQEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part14', 'scsi-SQEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part15', 'scsi-SQEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part16', 'scsi-SQEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:55:08.996891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--457b984f--2001--5589--9984--9a697803acd2-osd--block--457b984f--2001--5589--9984--9a697803acd2', 
'dm-uuid-LVM-tzkPODnltvbLVlVrcBUaBpanSvFXy5Iay3kG9ArxWiFQxKflJopyLP8Gmm4Yvbsw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.996901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d2877fc6--62dc--51ad--b157--4c09a4f274b5-osd--block--d2877fc6--62dc--51ad--b157--4c09a4f274b5', 'dm-uuid-LVM-7LVTsKXd7HIvwLOlwIgvnvIMJ54t1cPOgYpDah7ONpCBRQytZ43PyRVPBl38dgW5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.996911 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:08.996922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.996971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a154e298--15cb--5d50--9a1c--17bc1371db7e-osd--block--a154e298--15cb--5d50--9a1c--17bc1371db7e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-P3bfw4-K4Nt-fYqq-420J-zfth-TTxR-E3QAou', 'scsi-0QEMU_QEMU_HARDDISK_5c63af0b-1be6-4a9c-8f35-a4445080f1db', 'scsi-SQEMU_QEMU_HARDDISK_5c63af0b-1be6-4a9c-8f35-a4445080f1db'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:55:08.996990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--56010334--63d7--5603--a2fe--432c47d6dcb8-osd--block--56010334--63d7--5603--a2fe--432c47d6dcb8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3tif6z-fkDa-jtZO-rKIs-mW9z-sNzV-G9Mbhe', 'scsi-0QEMU_QEMU_HARDDISK_da9e83cb-2e5e-4388-ad73-1879a24665a3', 'scsi-SQEMU_QEMU_HARDDISK_da9e83cb-2e5e-4388-ad73-1879a24665a3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:55:08.997007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46481bd5-1fc4-4619-9f81-82a2d5c944be', 'scsi-SQEMU_QEMU_HARDDISK_46481bd5-1fc4-4619-9f81-82a2d5c944be'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:55:08.997017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997028 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-16-00-02-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:55:08.997038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997048 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accc6646-5f95-4ba9-892c-603bcb8fd4c4', 'scsi-SQEMU_QEMU_HARDDISK_accc6646-5f95-4ba9-892c-603bcb8fd4c4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accc6646-5f95-4ba9-892c-603bcb8fd4c4-part1', 'scsi-SQEMU_QEMU_HARDDISK_accc6646-5f95-4ba9-892c-603bcb8fd4c4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accc6646-5f95-4ba9-892c-603bcb8fd4c4-part14', 'scsi-SQEMU_QEMU_HARDDISK_accc6646-5f95-4ba9-892c-603bcb8fd4c4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accc6646-5f95-4ba9-892c-603bcb8fd4c4-part15', 'scsi-SQEMU_QEMU_HARDDISK_accc6646-5f95-4ba9-892c-603bcb8fd4c4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accc6646-5f95-4ba9-892c-603bcb8fd4c4-part16', 'scsi-SQEMU_QEMU_HARDDISK_accc6646-5f95-4ba9-892c-603bcb8fd4c4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:55:08.997172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-16-00-02-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:55:08.997183 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997204 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997213 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595', 'scsi-SQEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part1', 'scsi-SQEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part14', 'scsi-SQEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part15', 'scsi-SQEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part16', 'scsi-SQEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:55:08.997262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': 
['ceph--457b984f--2001--5589--9984--9a697803acd2-osd--block--457b984f--2001--5589--9984--9a697803acd2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mgW1BQ-nXoc-c1V8-OokY-Wdcb-Y2DR-cYNYFs', 'scsi-0QEMU_QEMU_HARDDISK_a99d92e2-a7d0-4115-a3b5-db7bfa0170a9', 'scsi-SQEMU_QEMU_HARDDISK_a99d92e2-a7d0-4115-a3b5-db7bfa0170a9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:55:08.997272 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d2877fc6--62dc--51ad--b157--4c09a4f274b5-osd--block--d2877fc6--62dc--51ad--b157--4c09a4f274b5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TlF0kY-AYW1-VWgt-gIFx-fbP3-MThZ-R9X0sN', 'scsi-0QEMU_QEMU_HARDDISK_f8c86b93-6440-4cc6-ba3c-00ae05f2a443', 'scsi-SQEMU_QEMU_HARDDISK_f8c86b93-6440-4cc6-ba3c-00ae05f2a443'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:55:08.997283 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad9de541-7002-4a51-9253-a212a9f46ca2', 'scsi-SQEMU_QEMU_HARDDISK_ad9de541-7002-4a51-9253-a212a9f46ca2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:55:08.997298 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-16-00-02-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:55:08.997309 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:08.997324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_073617aa-e75d-409e-9d4b-061b932bfcf4', 'scsi-SQEMU_QEMU_HARDDISK_073617aa-e75d-409e-9d4b-061b932bfcf4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_073617aa-e75d-409e-9d4b-061b932bfcf4-part1', 'scsi-SQEMU_QEMU_HARDDISK_073617aa-e75d-409e-9d4b-061b932bfcf4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_073617aa-e75d-409e-9d4b-061b932bfcf4-part14', 'scsi-SQEMU_QEMU_HARDDISK_073617aa-e75d-409e-9d4b-061b932bfcf4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_073617aa-e75d-409e-9d4b-061b932bfcf4-part15', 'scsi-SQEMU_QEMU_HARDDISK_073617aa-e75d-409e-9d4b-061b932bfcf4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_073617aa-e75d-409e-9d4b-061b932bfcf4-part16', 'scsi-SQEMU_QEMU_HARDDISK_073617aa-e75d-409e-9d4b-061b932bfcf4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:55:08.997605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-16-00-02-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:55:08.997619 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:08.997629 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:08.997638 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:08.997648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:55:08.997747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16f6c2a1-43c7-4984-96fd-7906308a93da', 'scsi-SQEMU_QEMU_HARDDISK_16f6c2a1-43c7-4984-96fd-7906308a93da'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16f6c2a1-43c7-4984-96fd-7906308a93da-part1', 'scsi-SQEMU_QEMU_HARDDISK_16f6c2a1-43c7-4984-96fd-7906308a93da-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16f6c2a1-43c7-4984-96fd-7906308a93da-part14', 'scsi-SQEMU_QEMU_HARDDISK_16f6c2a1-43c7-4984-96fd-7906308a93da-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16f6c2a1-43c7-4984-96fd-7906308a93da-part15', 'scsi-SQEMU_QEMU_HARDDISK_16f6c2a1-43c7-4984-96fd-7906308a93da-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16f6c2a1-43c7-4984-96fd-7906308a93da-part16', 'scsi-SQEMU_QEMU_HARDDISK_16f6c2a1-43c7-4984-96fd-7906308a93da-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:55:08.997769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-16-00-02-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:55:08.997780 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:08.997803 | orchestrator | 2025-09-16 00:55:08.997813 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-16 00:55:08.997823 | orchestrator | Tuesday 16 September 2025 00:45:07 +0000 (0:00:02.628) 0:00:36.847 ***** 2025-09-16 00:55:08.997837 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8832b43a--4370--5f7f--b8ca--e1ef860202d6-osd--block--8832b43a--4370--5f7f--b8ca--e1ef860202d6', 'dm-uuid-LVM-YcgXbQSLW6T92S6r08xR6FKW11TasuSzM1boHuyKTVrqUfc58vek5nzVrYSc131l'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.997848 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b409e677--b998--57d2--be40--43b65c9fb72d-osd--block--b409e677--b998--57d2--be40--43b65c9fb72d', 'dm-uuid-LVM-gbuhjycd69TX34wcCVoOmjlpPQ8wKcDDF2HTNd5TzUHszt21u4Oo8BSW9v29cmec'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.997859 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--457b984f--2001--5589--9984--9a697803acd2-osd--block--457b984f--2001--5589--9984--9a697803acd2', 'dm-uuid-LVM-tzkPODnltvbLVlVrcBUaBpanSvFXy5Iay3kG9ArxWiFQxKflJopyLP8Gmm4Yvbsw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.997870 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a154e298--15cb--5d50--9a1c--17bc1371db7e-osd--block--a154e298--15cb--5d50--9a1c--17bc1371db7e', 'dm-uuid-LVM-EFbsuN8afaIRpM6v16JYlvMAjTlWagjCgfIoPoiTnRbMaFJFK1uNEn8SJIoQO836'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.997890 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.997906 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--56010334--63d7--5603--a2fe--432c47d6dcb8-osd--block--56010334--63d7--5603--a2fe--432c47d6dcb8', 'dm-uuid-LVM-ucs6vcOg2JldR43Dv3HJMWOQXgxk4Rjo7I7oTzMQaB9pBSV82lBHT1wVuGjA34S1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.997917 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d2877fc6--62dc--51ad--b157--4c09a4f274b5-osd--block--d2877fc6--62dc--51ad--b157--4c09a4f274b5', 'dm-uuid-LVM-7LVTsKXd7HIvwLOlwIgvnvIMJ54t1cPOgYpDah7ONpCBRQytZ43PyRVPBl38dgW5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.997927 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.997937 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.997948 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.997963 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.997979 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.997993 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.998004 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.998047 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.998060 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.998077 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.998093 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.998104 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.998114 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.999656 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.999730 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.999747 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.999779 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.999850 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.999870 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.999882 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.999909 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.999922 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.999942 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.999954 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.999965 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:08.999981 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000014 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000026 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000044 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000072 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_073617aa-e75d-409e-9d4b-061b932bfcf4', 'scsi-SQEMU_QEMU_HARDDISK_073617aa-e75d-409e-9d4b-061b932bfcf4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_073617aa-e75d-409e-9d4b-061b932bfcf4-part1', 'scsi-SQEMU_QEMU_HARDDISK_073617aa-e75d-409e-9d4b-061b932bfcf4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_073617aa-e75d-409e-9d4b-061b932bfcf4-part14', 'scsi-SQEMU_QEMU_HARDDISK_073617aa-e75d-409e-9d4b-061b932bfcf4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_073617aa-e75d-409e-9d4b-061b932bfcf4-part15', 'scsi-SQEMU_QEMU_HARDDISK_073617aa-e75d-409e-9d4b-061b932bfcf4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_073617aa-e75d-409e-9d4b-061b932bfcf4-part16', 'scsi-SQEMU_QEMU_HARDDISK_073617aa-e75d-409e-9d4b-061b932bfcf4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000087 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000106 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000118 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000142 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accc6646-5f95-4ba9-892c-603bcb8fd4c4', 'scsi-SQEMU_QEMU_HARDDISK_accc6646-5f95-4ba9-892c-603bcb8fd4c4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accc6646-5f95-4ba9-892c-603bcb8fd4c4-part1', 'scsi-SQEMU_QEMU_HARDDISK_accc6646-5f95-4ba9-892c-603bcb8fd4c4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accc6646-5f95-4ba9-892c-603bcb8fd4c4-part14', 'scsi-SQEMU_QEMU_HARDDISK_accc6646-5f95-4ba9-892c-603bcb8fd4c4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accc6646-5f95-4ba9-892c-603bcb8fd4c4-part15', 'scsi-SQEMU_QEMU_HARDDISK_accc6646-5f95-4ba9-892c-603bcb8fd4c4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accc6646-5f95-4ba9-892c-603bcb8fd4c4-part16', 'scsi-SQEMU_QEMU_HARDDISK_accc6646-5f95-4ba9-892c-603bcb8fd4c4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000155 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000174 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-16-00-02-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000185 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000197 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000210 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.000227 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-16-00-02-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000248 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370', 'scsi-SQEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part1', 'scsi-SQEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part14', 'scsi-SQEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part15', 'scsi-SQEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part16', 'scsi-SQEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000267 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000279 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.000295 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a154e298--15cb--5d50--9a1c--17bc1371db7e-osd--block--a154e298--15cb--5d50--9a1c--17bc1371db7e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-P3bfw4-K4Nt-fYqq-420J-zfth-TTxR-E3QAou', 'scsi-0QEMU_QEMU_HARDDISK_5c63af0b-1be6-4a9c-8f35-a4445080f1db', 'scsi-SQEMU_QEMU_HARDDISK_5c63af0b-1be6-4a9c-8f35-a4445080f1db'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000308 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000326 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--56010334--63d7--5603--a2fe--432c47d6dcb8-osd--block--56010334--63d7--5603--a2fe--432c47d6dcb8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3tif6z-fkDa-jtZO-rKIs-mW9z-sNzV-G9Mbhe', 'scsi-0QEMU_QEMU_HARDDISK_da9e83cb-2e5e-4388-ad73-1879a24665a3', 'scsi-SQEMU_QEMU_HARDDISK_da9e83cb-2e5e-4388-ad73-1879a24665a3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000344 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000355 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46481bd5-1fc4-4619-9f81-82a2d5c944be', 'scsi-SQEMU_QEMU_HARDDISK_46481bd5-1fc4-4619-9f81-82a2d5c944be'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000367 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000397 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16f6c2a1-43c7-4984-96fd-7906308a93da', 'scsi-SQEMU_QEMU_HARDDISK_16f6c2a1-43c7-4984-96fd-7906308a93da'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16f6c2a1-43c7-4984-96fd-7906308a93da-part1', 'scsi-SQEMU_QEMU_HARDDISK_16f6c2a1-43c7-4984-96fd-7906308a93da-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16f6c2a1-43c7-4984-96fd-7906308a93da-part14', 'scsi-SQEMU_QEMU_HARDDISK_16f6c2a1-43c7-4984-96fd-7906308a93da-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16f6c2a1-43c7-4984-96fd-7906308a93da-part15', 'scsi-SQEMU_QEMU_HARDDISK_16f6c2a1-43c7-4984-96fd-7906308a93da-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_16f6c2a1-43c7-4984-96fd-7906308a93da-part16', 'scsi-SQEMU_QEMU_HARDDISK_16f6c2a1-43c7-4984-96fd-7906308a93da-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000417 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-16-00-02-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000429 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-16-00-02-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000441 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.000456 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000467 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.000479 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000497 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000515 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000526 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000538 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000562 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595', 'scsi-SQEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part1', 'scsi-SQEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part14', 'scsi-SQEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part15', 'scsi-SQEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part16', 'scsi-SQEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000581 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000593 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--457b984f--2001--5589--9984--9a697803acd2-osd--block--457b984f--2001--5589--9984--9a697803acd2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mgW1BQ-nXoc-c1V8-OokY-Wdcb-Y2DR-cYNYFs', 'scsi-0QEMU_QEMU_HARDDISK_a99d92e2-a7d0-4115-a3b5-db7bfa0170a9', 'scsi-SQEMU_QEMU_HARDDISK_a99d92e2-a7d0-4115-a3b5-db7bfa0170a9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000605 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d2877fc6--62dc--51ad--b157--4c09a4f274b5-osd--block--d2877fc6--62dc--51ad--b157--4c09a4f274b5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TlF0kY-AYW1-VWgt-gIFx-fbP3-MThZ-R9X0sN', 'scsi-0QEMU_QEMU_HARDDISK_f8c86b93-6440-4cc6-ba3c-00ae05f2a443', 'scsi-SQEMU_QEMU_HARDDISK_f8c86b93-6440-4cc6-ba3c-00ae05f2a443'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000629 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf', 'scsi-SQEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part1', 'scsi-SQEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part14', 'scsi-SQEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part15', 'scsi-SQEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part16', 'scsi-SQEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000648 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8832b43a--4370--5f7f--b8ca--e1ef860202d6-osd--block--8832b43a--4370--5f7f--b8ca--e1ef860202d6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WHoXZD-QS5F-ff1Z-a1ef-ziTD-c1GW-4CB7Fq', 'scsi-0QEMU_QEMU_HARDDISK_216f9756-46fe-48b3-8a57-6cc5b7e0c275', 'scsi-SQEMU_QEMU_HARDDISK_216f9756-46fe-48b3-8a57-6cc5b7e0c275'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000661 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad9de541-7002-4a51-9253-a212a9f46ca2', 'scsi-SQEMU_QEMU_HARDDISK_ad9de541-7002-4a51-9253-a212a9f46ca2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000677 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b409e677--b998--57d2--be40--43b65c9fb72d-osd--block--b409e677--b998--57d2--be40--43b65c9fb72d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3K9cLz-vSxn-jDgK-Vo6h-4rad-CN2R-XamGFe', 'scsi-0QEMU_QEMU_HARDDISK_ebe7fd99-ddf0-4119-8dea-cb8b427f2aed', 'scsi-SQEMU_QEMU_HARDDISK_ebe7fd99-ddf0-4119-8dea-cb8b427f2aed'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000701 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b7c66eb-e150-40bb-863f-cd4924cbb0ab', 'scsi-SQEMU_QEMU_HARDDISK_6b7c66eb-e150-40bb-863f-cd4924cbb0ab'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000713 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-16-00-02-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000724 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.000735 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-16-00-02-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:55:09.000747 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.000759 | orchestrator | 2025-09-16 00:55:09.000771 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-16 00:55:09.000783 | orchestrator | Tuesday 16 September 2025 00:45:09 +0000 (0:00:01.947) 0:00:38.795 ***** 2025-09-16 00:55:09.000825 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.000836 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.000847 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.000858 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.000869 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.000881 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.000891 | orchestrator | 2025-09-16 00:55:09.000903 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-16 00:55:09.000914 | orchestrator | Tuesday 16 September 2025 00:45:12 +0000 (0:00:03.404) 0:00:42.199 ***** 2025-09-16 00:55:09.000925 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.000936 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.000947 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.000958 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.000969 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.000980 | 
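The loop results above come from ceph-facts evaluating every device in the host facts and skipping each item because `osd_auto_discovery` defaults to false (and, on the control-plane nodes, because the host is not in the `osd_group_name` group). A minimal Python sketch of that filtering, assuming auto-discovery would only keep non-removable, unpartitioned, unclaimed disks; the function and selection rules are illustrative, not ceph-ansible's code:

```python
# Minimal sketch (not ceph-ansible's actual code) of the per-device skip logic
# shown above: every entry in ansible_facts['devices'] is evaluated, and each
# loop item is skipped while osd_auto_discovery is false or the host is not an
# OSD node.

def eligible_osd_devices(devices: dict, osd_auto_discovery: bool, is_osd_node: bool) -> list:
    """Return device names that auto-discovery would consider for OSDs."""
    if not (osd_auto_discovery and is_osd_node):
        return []  # matches the 'skipping' lines for loop*, sda..sdd and sr0
    selected = []
    for name, facts in devices.items():
        if facts.get("removable") == "1":   # DVD/USB devices such as sr0
            continue
        if facts.get("partitions"):         # the partitioned root disk (sda)
            continue
        if facts.get("holders"):            # disks already claimed by LVM/ceph (sdb, sdc)
            continue
        selected.append(name)
    return selected

# Facts shaped like the entries dumped in the log:
devices = {
    "sr0": {"removable": "1", "partitions": {}, "holders": []},
    "sda": {"removable": "0", "partitions": {"sda1": {}}, "holders": []},
    "sdb": {"removable": "0", "partitions": {}, "holders": ["ceph--osd--block"]},
    "sdd": {"removable": "0", "partitions": {}, "holders": []},
}
print(eligible_osd_devices(devices, osd_auto_discovery=False, is_osd_node=True))  # []
print(eligible_osd_devices(devices, osd_auto_discovery=True, is_osd_node=True))   # ['sdd']
```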
orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.000991 | orchestrator | 2025-09-16 00:55:09.001010 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-16 00:55:09.001022 | orchestrator | Tuesday 16 September 2025 00:45:13 +0000 (0:00:00.971) 0:00:43.171 ***** 2025-09-16 00:55:09.001033 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.001049 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.001061 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.001072 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.001083 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.001093 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.001104 | orchestrator | 2025-09-16 00:55:09.001115 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-16 00:55:09.001127 | orchestrator | Tuesday 16 September 2025 00:45:14 +0000 (0:00:00.739) 0:00:43.910 ***** 2025-09-16 00:55:09.001138 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.001148 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.001159 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.001170 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.001181 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.001192 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.001202 | orchestrator | 2025-09-16 00:55:09.001213 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-16 00:55:09.001224 | orchestrator | Tuesday 16 September 2025 00:45:15 +0000 (0:00:00.796) 0:00:44.707 ***** 2025-09-16 00:55:09.001235 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.001246 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.001257 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.001268 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.001279 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.001296 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.001307 | orchestrator | 2025-09-16 00:55:09.001318 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-16 00:55:09.001330 | orchestrator | Tuesday 16 September 2025 00:45:16 +0000 (0:00:00.979) 0:00:45.687 ***** 2025-09-16 00:55:09.001341 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.001352 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.001363 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.001373 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.001384 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.001395 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.001406 | orchestrator | 2025-09-16 00:55:09.001417 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-16 00:55:09.001428 | orchestrator | Tuesday 16 September 2025 00:45:16 +0000 (0:00:00.552) 0:00:46.239 ***** 2025-09-16 00:55:09.001439 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-16 00:55:09.001450 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-16 00:55:09.001461 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-16 00:55:09.001472 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-16 
00:55:09.001483 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-16 00:55:09.001494 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-16 00:55:09.001505 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-16 00:55:09.001515 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-16 00:55:09.001526 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-16 00:55:09.001537 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-16 00:55:09.001548 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-09-16 00:55:09.001559 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-16 00:55:09.001569 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-09-16 00:55:09.001580 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-16 00:55:09.001591 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-09-16 00:55:09.001607 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-09-16 00:55:09.001618 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-09-16 00:55:09.001629 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-09-16 00:55:09.001640 | orchestrator | 2025-09-16 00:55:09.001651 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-16 00:55:09.001662 | orchestrator | Tuesday 16 September 2025 00:45:21 +0000 (0:00:04.548) 0:00:50.788 ***** 2025-09-16 00:55:09.001673 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-16 00:55:09.001684 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-16 00:55:09.001695 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-16 00:55:09.001706 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.001717 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-16 00:55:09.001728 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-16 00:55:09.001738 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-16 00:55:09.001749 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.001760 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-16 00:55:09.001771 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-16 00:55:09.001782 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-16 00:55:09.001835 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-16 00:55:09.001847 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-16 00:55:09.001858 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-16 00:55:09.001869 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.001880 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-16 00:55:09.001891 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-16 00:55:09.001902 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.001913 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-16 00:55:09.001924 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.001935 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-16 00:55:09.001955 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-16 
00:55:09.001975 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-16 00:55:09.002004 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.002100 | orchestrator | 2025-09-16 00:55:09.002134 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-16 00:55:09.002155 | orchestrator | Tuesday 16 September 2025 00:45:22 +0000 (0:00:00.997) 0:00:51.785 ***** 2025-09-16 00:55:09.002169 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.002180 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.002191 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.002202 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.002213 | orchestrator | 2025-09-16 00:55:09.002224 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-16 00:55:09.002235 | orchestrator | Tuesday 16 September 2025 00:45:23 +0000 (0:00:01.006) 0:00:52.791 ***** 2025-09-16 00:55:09.002246 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.002256 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.002267 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.002278 | orchestrator | 2025-09-16 00:55:09.002289 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-16 00:55:09.002300 | orchestrator | Tuesday 16 September 2025 00:45:23 +0000 (0:00:00.512) 0:00:53.304 ***** 2025-09-16 00:55:09.002319 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.002330 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.002351 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.002362 | orchestrator | 2025-09-16 00:55:09.002373 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-16 00:55:09.002384 | orchestrator | Tuesday 16 September 2025 00:45:24 +0000 (0:00:00.427) 0:00:53.731 ***** 2025-09-16 00:55:09.002394 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.002405 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.002416 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.002426 | orchestrator | 2025-09-16 00:55:09.002437 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-16 00:55:09.002448 | orchestrator | Tuesday 16 September 2025 00:45:24 +0000 (0:00:00.483) 0:00:54.215 ***** 2025-09-16 00:55:09.002458 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.002469 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.002480 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.002491 | orchestrator | 2025-09-16 00:55:09.002502 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-16 00:55:09.002512 | orchestrator | Tuesday 16 September 2025 00:45:25 +0000 (0:00:00.582) 0:00:54.797 ***** 2025-09-16 00:55:09.002523 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-16 00:55:09.002534 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-16 00:55:09.002544 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-16 00:55:09.002555 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.002566 | orchestrator | 2025-09-16 
00:55:09.002576 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-16 00:55:09.002587 | orchestrator | Tuesday 16 September 2025 00:45:25 +0000 (0:00:00.371) 0:00:55.169 ***** 2025-09-16 00:55:09.002598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-16 00:55:09.002609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-16 00:55:09.002620 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-16 00:55:09.002630 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.002641 | orchestrator | 2025-09-16 00:55:09.002652 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-16 00:55:09.002662 | orchestrator | Tuesday 16 September 2025 00:45:26 +0000 (0:00:00.474) 0:00:55.643 ***** 2025-09-16 00:55:09.002673 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-16 00:55:09.002684 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-16 00:55:09.002694 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-16 00:55:09.002705 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.002716 | orchestrator | 2025-09-16 00:55:09.002726 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-16 00:55:09.002737 | orchestrator | Tuesday 16 September 2025 00:45:26 +0000 (0:00:00.361) 0:00:56.005 ***** 2025-09-16 00:55:09.002748 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.002759 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.002769 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.002780 | orchestrator | 2025-09-16 00:55:09.002810 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-16 00:55:09.002821 | orchestrator | Tuesday 16 September 2025 00:45:26 +0000 (0:00:00.333) 0:00:56.338 ***** 2025-09-16 00:55:09.002832 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-16 00:55:09.002842 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-16 00:55:09.002853 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-16 00:55:09.002864 | orchestrator | 2025-09-16 00:55:09.002874 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-16 00:55:09.002885 | orchestrator | Tuesday 16 September 2025 00:45:28 +0000 (0:00:01.873) 0:00:58.211 ***** 2025-09-16 00:55:09.002896 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-16 00:55:09.002908 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-16 00:55:09.002929 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-16 00:55:09.002940 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-16 00:55:09.002950 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-16 00:55:09.002961 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-16 00:55:09.002972 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-16 00:55:09.002983 | orchestrator | 2025-09-16 00:55:09.002993 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-16 00:55:09.003009 | 
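The ceph-facts tasks logged above accumulate one IPv4 entry per monitor into `_monitor_addresses` and then derive `_radosgw_address` and `rgw_instances` for the rgw nodes. A hedged sketch of what the monitor-address fact amounts to, using the node addresses visible in the delegation lines above; the structure is a reconstruction, not the role's actual template:

```python
# Hypothetical reconstruction of the _monitor_addresses fact: one {name, addr}
# entry per monitor, built from the monitor IPv4 addresses seen in the
# delegation output above.

monitors = {
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
}

def monitor_addresses(mon_hosts):
    """Every host ends up with the same list, mirroring the per-item set_fact loop."""
    return [{"name": name, "addr": addr} for name, addr in mon_hosts.items()]

# A mon_host-style connection string derived from it:
print(",".join(entry["addr"] for entry in monitor_addresses(monitors)))
# 192.168.16.10,192.168.16.11,192.168.16.12
```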
orchestrator | Tuesday 16 September 2025 00:45:29 +0000 (0:00:01.190) 0:00:59.401 ***** 2025-09-16 00:55:09.003020 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-16 00:55:09.003031 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-16 00:55:09.003041 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-16 00:55:09.003052 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-16 00:55:09.003063 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-16 00:55:09.003074 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-16 00:55:09.003085 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-16 00:55:09.003096 | orchestrator | 2025-09-16 00:55:09.003106 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-16 00:55:09.003117 | orchestrator | Tuesday 16 September 2025 00:45:31 +0000 (0:00:01.783) 0:01:01.184 ***** 2025-09-16 00:55:09.003143 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:09.003156 | orchestrator | 2025-09-16 00:55:09.003167 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-16 00:55:09.003178 | orchestrator | Tuesday 16 September 2025 00:45:33 +0000 (0:00:01.459) 0:01:02.644 ***** 2025-09-16 00:55:09.003189 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:09.003200 | orchestrator | 2025-09-16 00:55:09.003210 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-16 00:55:09.003221 | orchestrator | Tuesday 16 September 2025 00:45:34 +0000 (0:00:01.288) 0:01:03.932 ***** 2025-09-16 00:55:09.003232 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.003243 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.003254 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.003264 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.003275 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.003286 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.003297 | orchestrator | 2025-09-16 00:55:09.003308 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-16 00:55:09.003319 | orchestrator | Tuesday 16 September 2025 00:45:35 +0000 (0:00:01.186) 0:01:05.119 ***** 2025-09-16 00:55:09.003329 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.003340 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.003351 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.003362 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.003372 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.003383 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.003394 | orchestrator | 2025-09-16 00:55:09.003404 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-16 00:55:09.003415 | orchestrator | Tuesday 16 September 2025 00:45:37 
+0000 (0:00:01.536) 0:01:06.656 ***** 2025-09-16 00:55:09.003432 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.003443 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.003454 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.003465 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.003476 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.003486 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.003497 | orchestrator | 2025-09-16 00:55:09.003508 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-16 00:55:09.003519 | orchestrator | Tuesday 16 September 2025 00:45:38 +0000 (0:00:01.080) 0:01:07.736 ***** 2025-09-16 00:55:09.003529 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.003540 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.003551 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.003562 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.003572 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.003583 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.003594 | orchestrator | 2025-09-16 00:55:09.003604 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-16 00:55:09.003615 | orchestrator | Tuesday 16 September 2025 00:45:39 +0000 (0:00:00.855) 0:01:08.591 ***** 2025-09-16 00:55:09.003626 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.003637 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.003648 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.003658 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.003669 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.003680 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.003690 | orchestrator | 2025-09-16 00:55:09.003701 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-16 00:55:09.003712 | orchestrator | Tuesday 16 September 2025 00:45:40 +0000 (0:00:01.172) 0:01:09.764 ***** 2025-09-16 00:55:09.003723 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.003734 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.003744 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.003755 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.003766 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.003776 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.003800 | orchestrator | 2025-09-16 00:55:09.003812 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-16 00:55:09.003823 | orchestrator | Tuesday 16 September 2025 00:45:40 +0000 (0:00:00.555) 0:01:10.319 ***** 2025-09-16 00:55:09.003833 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.003844 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.003854 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.003865 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.003876 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.003886 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.003897 | orchestrator | 2025-09-16 00:55:09.003908 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-16 00:55:09.003923 | orchestrator | Tuesday 16 September 2025 00:45:42 +0000 (0:00:01.506) 0:01:11.826 ***** 2025-09-16 
00:55:09.003934 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.003945 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.003956 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.003966 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.003977 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.003988 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.003998 | orchestrator | 2025-09-16 00:55:09.004009 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-16 00:55:09.004020 | orchestrator | Tuesday 16 September 2025 00:45:43 +0000 (0:00:01.183) 0:01:13.010 ***** 2025-09-16 00:55:09.004031 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.004041 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.004052 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.004062 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.004073 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.004089 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.004100 | orchestrator | 2025-09-16 00:55:09.004111 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-16 00:55:09.004122 | orchestrator | Tuesday 16 September 2025 00:45:44 +0000 (0:00:00.957) 0:01:13.968 ***** 2025-09-16 00:55:09.004132 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.004143 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.004160 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.004171 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.004182 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.004193 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.004204 | orchestrator | 2025-09-16 00:55:09.004215 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-16 00:55:09.004225 | orchestrator | Tuesday 16 September 2025 00:45:45 +0000 (0:00:01.022) 0:01:14.991 ***** 2025-09-16 00:55:09.004236 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.004247 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.004258 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.004268 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.004279 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.004290 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.004300 | orchestrator | 2025-09-16 00:55:09.004311 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-16 00:55:09.004322 | orchestrator | Tuesday 16 September 2025 00:45:46 +0000 (0:00:00.556) 0:01:15.547 ***** 2025-09-16 00:55:09.004333 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.004343 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.004354 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.004364 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.004375 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.004386 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.004397 | orchestrator | 2025-09-16 00:55:09.004407 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-16 00:55:09.004419 | orchestrator | Tuesday 16 September 2025 00:45:46 +0000 (0:00:00.845) 0:01:16.392 ***** 2025-09-16 00:55:09.004429 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.004440 | orchestrator | ok: 
[testbed-node-4] 2025-09-16 00:55:09.004451 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.004461 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.004472 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.004483 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.004493 | orchestrator | 2025-09-16 00:55:09.004504 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-16 00:55:09.004515 | orchestrator | Tuesday 16 September 2025 00:45:47 +0000 (0:00:00.640) 0:01:17.032 ***** 2025-09-16 00:55:09.004526 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.004536 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.004547 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.004557 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.004568 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.004579 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.004590 | orchestrator | 2025-09-16 00:55:09.004600 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-16 00:55:09.004611 | orchestrator | Tuesday 16 September 2025 00:45:48 +0000 (0:00:00.726) 0:01:17.759 ***** 2025-09-16 00:55:09.004622 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.004632 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.004643 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.004654 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.004664 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.004675 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.004685 | orchestrator | 2025-09-16 00:55:09.004696 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-16 00:55:09.004707 | orchestrator | Tuesday 16 September 2025 00:45:48 +0000 (0:00:00.479) 0:01:18.238 ***** 2025-09-16 00:55:09.004723 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.004734 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.004745 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.004755 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.004766 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.004777 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.004836 | orchestrator | 2025-09-16 00:55:09.004849 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-16 00:55:09.004860 | orchestrator | Tuesday 16 September 2025 00:45:49 +0000 (0:00:00.624) 0:01:18.863 ***** 2025-09-16 00:55:09.004871 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.004881 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.004892 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.004903 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.004914 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.004925 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.004936 | orchestrator | 2025-09-16 00:55:09.004946 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-16 00:55:09.004958 | orchestrator | Tuesday 16 September 2025 00:45:49 +0000 (0:00:00.496) 0:01:19.359 ***** 2025-09-16 00:55:09.004968 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.004979 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.004990 | 
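The ceph-handler checks above probe each node for already-running mon/osd/mds/rgw/mgr/crash/exporter containers and record the results as `handler_*_status` facts, which later decide whether restart handlers apply. A sketch of that kind of probe against the container runtime; the `ceph-<daemon>-<hostname>` name filter is an assumption, not taken from this log:

```python
# Hedged sketch of the "check for a <daemon> container" pattern: ask the
# container runtime whether a container matching the daemon and hostname
# exists, and record one boolean per daemon type.

import shutil
import socket
import subprocess

def container_running(runtime: str, name_filter: str) -> bool:
    """True if `<runtime> ps -q --filter name=<filter>` prints at least one id."""
    out = subprocess.run(
        [runtime, "ps", "-q", "--filter", f"name={name_filter}"],
        capture_output=True, text=True, check=False,
    )
    return bool(out.stdout.strip())

runtime = shutil.which("docker") or shutil.which("podman")
if runtime is None:
    raise SystemExit("no container runtime found")

hostname = socket.gethostname()
handler_status = {
    daemon: container_running(runtime, f"ceph-{daemon}-{hostname}")  # assumed naming scheme
    for daemon in ("mon", "osd", "mds", "rgw", "mgr", "crash")
}
print(handler_status)
```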
orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.005001 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.005011 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.005022 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.005033 | orchestrator | 2025-09-16 00:55:09.005047 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-16 00:55:09.005057 | orchestrator | Tuesday 16 September 2025 00:45:50 +0000 (0:00:00.667) 0:01:20.027 ***** 2025-09-16 00:55:09.005067 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.005076 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.005086 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.005095 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.005105 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.005114 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.005124 | orchestrator | 2025-09-16 00:55:09.005133 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-09-16 00:55:09.005143 | orchestrator | Tuesday 16 September 2025 00:45:51 +0000 (0:00:00.995) 0:01:21.022 ***** 2025-09-16 00:55:09.005153 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.005163 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.005172 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.005182 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.005191 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.005201 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.005210 | orchestrator | 2025-09-16 00:55:09.005220 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-09-16 00:55:09.005230 | orchestrator | Tuesday 16 September 2025 00:45:53 +0000 (0:00:01.411) 0:01:22.434 ***** 2025-09-16 00:55:09.005240 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.005249 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.005264 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.005274 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.005283 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.005293 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.005303 | orchestrator | 2025-09-16 00:55:09.005312 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-09-16 00:55:09.005322 | orchestrator | Tuesday 16 September 2025 00:45:55 +0000 (0:00:02.011) 0:01:24.446 ***** 2025-09-16 00:55:09.005332 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:09.005348 | orchestrator | 2025-09-16 00:55:09.005358 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-09-16 00:55:09.005367 | orchestrator | Tuesday 16 September 2025 00:45:56 +0000 (0:00:01.002) 0:01:25.449 ***** 2025-09-16 00:55:09.005377 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.005386 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.005396 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.005406 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.005415 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.005425 | orchestrator | skipping: [testbed-node-2] 2025-09-16 
00:55:09.005434 | orchestrator | 2025-09-16 00:55:09.005444 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-09-16 00:55:09.005454 | orchestrator | Tuesday 16 September 2025 00:45:56 +0000 (0:00:00.492) 0:01:25.941 ***** 2025-09-16 00:55:09.005463 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.005473 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.005482 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.005492 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.005501 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.005511 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.005520 | orchestrator | 2025-09-16 00:55:09.005530 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-09-16 00:55:09.005540 | orchestrator | Tuesday 16 September 2025 00:45:57 +0000 (0:00:00.620) 0:01:26.562 ***** 2025-09-16 00:55:09.005550 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-16 00:55:09.005559 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-16 00:55:09.005569 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-16 00:55:09.005579 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-16 00:55:09.005588 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-16 00:55:09.005598 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-16 00:55:09.005607 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-16 00:55:09.005617 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-16 00:55:09.005626 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-16 00:55:09.005636 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-16 00:55:09.005646 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-16 00:55:09.005655 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-16 00:55:09.005665 | orchestrator | 2025-09-16 00:55:09.005674 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-09-16 00:55:09.005684 | orchestrator | Tuesday 16 September 2025 00:45:58 +0000 (0:00:01.199) 0:01:27.761 ***** 2025-09-16 00:55:09.005693 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.005703 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.005713 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.005722 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.005732 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.005741 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.005751 | orchestrator | 2025-09-16 00:55:09.005760 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-09-16 00:55:09.005770 | orchestrator | Tuesday 16 September 2025 00:45:59 +0000 (0:00:01.062) 0:01:28.824 ***** 2025-09-16 00:55:09.005780 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.005802 | orchestrator | 
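ceph-container-common then generates and enables a `ceph.target` unit on every node (both tasks report `changed`), removes stale ceph udev rules, and ensures a tmpfiles.d entry. A minimal sketch of the target-unit step, with an illustrative unit body rather than the role's actual template:

```python
# Hedged sketch of the "Generate systemd ceph target file" / "Enable ceph.target"
# steps: write a minimal target unit and enable it so the containerized Ceph
# services can be grouped under one target. Unit content is illustrative only.

import pathlib
import subprocess

UNIT = """\
[Unit]
Description=All Ceph clusters and services

[Install]
WantedBy=multi-user.target
"""

def install_ceph_target(path: str = "/etc/systemd/system/ceph.target") -> None:
    pathlib.Path(path).write_text(UNIT)
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "enable", "ceph.target"], check=True)

if __name__ == "__main__":
    install_ceph_target()
```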
skipping: [testbed-node-4] 2025-09-16 00:55:09.005812 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.005821 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.005840 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.005850 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.005860 | orchestrator | 2025-09-16 00:55:09.005870 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-09-16 00:55:09.005879 | orchestrator | Tuesday 16 September 2025 00:45:59 +0000 (0:00:00.531) 0:01:29.355 ***** 2025-09-16 00:55:09.005889 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.005899 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.005908 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.005918 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.005928 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.005937 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.005947 | orchestrator | 2025-09-16 00:55:09.005956 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-09-16 00:55:09.005966 | orchestrator | Tuesday 16 September 2025 00:46:00 +0000 (0:00:00.630) 0:01:29.986 ***** 2025-09-16 00:55:09.005976 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.005985 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.005995 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.006004 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.006014 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.006058 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.006068 | orchestrator | 2025-09-16 00:55:09.006083 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-09-16 00:55:09.006093 | orchestrator | Tuesday 16 September 2025 00:46:01 +0000 (0:00:00.507) 0:01:30.494 ***** 2025-09-16 00:55:09.006103 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:09.006113 | orchestrator | 2025-09-16 00:55:09.006123 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-09-16 00:55:09.006132 | orchestrator | Tuesday 16 September 2025 00:46:02 +0000 (0:00:01.018) 0:01:31.512 ***** 2025-09-16 00:55:09.006142 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.006152 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.006161 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.006171 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.006181 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.006190 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.006200 | orchestrator | 2025-09-16 00:55:09.006210 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-09-16 00:55:09.006220 | orchestrator | Tuesday 16 September 2025 00:47:01 +0000 (0:00:58.993) 0:02:30.505 ***** 2025-09-16 00:55:09.006229 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-16 00:55:09.006239 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-16 00:55:09.006249 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  
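The roughly one-minute step above is the Ceph container image pull on all six nodes; the monitoring images are skipped because the dashboard stack is not deployed here. A hedged sketch of a pull with simple retries; the runtime and image reference are placeholders, not the ones configured for this job:

```python
# Hedged sketch of the long, network-bound image-pull step: pull the configured
# Ceph image with the container runtime and retry a few times on failure.

import subprocess
import time

def pull_image(runtime: str, image: str, retries: int = 3, delay: float = 10.0) -> None:
    for attempt in range(1, retries + 1):
        if subprocess.run([runtime, "pull", image], check=False).returncode == 0:
            return
        if attempt < retries:
            time.sleep(delay)
    raise RuntimeError(f"failed to pull {image} after {retries} attempts")

pull_image("docker", "quay.io/ceph/daemon:latest")  # placeholder image reference
```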
2025-09-16 00:55:09.006258 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.006268 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-16 00:55:09.006278 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-16 00:55:09.006288 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-16 00:55:09.006298 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.006307 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-16 00:55:09.006317 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-16 00:55:09.006327 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-16 00:55:09.006336 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.006346 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-16 00:55:09.006362 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-16 00:55:09.006372 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-16 00:55:09.006382 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-16 00:55:09.006391 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-16 00:55:09.006401 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-16 00:55:09.006411 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.006421 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.006430 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-16 00:55:09.006440 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-16 00:55:09.006450 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-16 00:55:09.006459 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.006469 | orchestrator | 2025-09-16 00:55:09.006479 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-09-16 00:55:09.006489 | orchestrator | Tuesday 16 September 2025 00:47:01 +0000 (0:00:00.718) 0:02:31.224 ***** 2025-09-16 00:55:09.006498 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.006508 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.006518 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.006527 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.006537 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.006547 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.006556 | orchestrator | 2025-09-16 00:55:09.006566 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-09-16 00:55:09.006576 | orchestrator | Tuesday 16 September 2025 00:47:02 +0000 (0:00:00.684) 0:02:31.909 ***** 2025-09-16 00:55:09.006585 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.006595 | orchestrator | 2025-09-16 00:55:09.006609 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-09-16 00:55:09.006619 | orchestrator | Tuesday 16 September 2025 00:47:02 +0000 (0:00:00.121) 0:02:32.031 ***** 2025-09-16 00:55:09.006629 | 
orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.006639 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.006648 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.006658 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.006668 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.006677 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.006687 | orchestrator | 2025-09-16 00:55:09.006696 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-09-16 00:55:09.006706 | orchestrator | Tuesday 16 September 2025 00:47:03 +0000 (0:00:00.592) 0:02:32.624 ***** 2025-09-16 00:55:09.006716 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.006726 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.006735 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.006745 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.006756 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.006773 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.006807 | orchestrator | 2025-09-16 00:55:09.006824 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-09-16 00:55:09.006840 | orchestrator | Tuesday 16 September 2025 00:47:03 +0000 (0:00:00.675) 0:02:33.299 ***** 2025-09-16 00:55:09.006870 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.006887 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.006903 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.006921 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.006939 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.006960 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.006971 | orchestrator | 2025-09-16 00:55:09.006981 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-09-16 00:55:09.006998 | orchestrator | Tuesday 16 September 2025 00:47:04 +0000 (0:00:00.495) 0:02:33.795 ***** 2025-09-16 00:55:09.007008 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.007017 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.007027 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.007037 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.007046 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.007056 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.007065 | orchestrator | 2025-09-16 00:55:09.007075 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-09-16 00:55:09.007085 | orchestrator | Tuesday 16 September 2025 00:47:06 +0000 (0:00:02.478) 0:02:36.273 ***** 2025-09-16 00:55:09.007095 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.007104 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.007114 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.007123 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.007133 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.007142 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.007152 | orchestrator | 2025-09-16 00:55:09.007162 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-09-16 00:55:09.007171 | orchestrator | Tuesday 16 September 2025 00:47:07 +0000 (0:00:00.521) 0:02:36.795 ***** 2025-09-16 00:55:09.007181 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:09.007191 | orchestrator | 2025-09-16 00:55:09.007201 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-09-16 00:55:09.007211 | orchestrator | Tuesday 16 September 2025 00:47:08 +0000 (0:00:01.085) 0:02:37.880 ***** 2025-09-16 00:55:09.007220 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.007230 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.007240 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.007250 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.007259 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.007269 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.007279 | orchestrator | 2025-09-16 00:55:09.007289 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-09-16 00:55:09.007298 | orchestrator | Tuesday 16 September 2025 00:47:09 +0000 (0:00:00.677) 0:02:38.558 ***** 2025-09-16 00:55:09.007308 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.007318 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.007327 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.007337 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.007347 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.007356 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.007366 | orchestrator | 2025-09-16 00:55:09.007376 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-09-16 00:55:09.007385 | orchestrator | Tuesday 16 September 2025 00:47:09 +0000 (0:00:00.627) 0:02:39.185 ***** 2025-09-16 00:55:09.007395 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.007405 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.007414 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.007424 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.007433 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.007443 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.007452 | orchestrator | 2025-09-16 00:55:09.007462 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-09-16 00:55:09.007472 | orchestrator | Tuesday 16 September 2025 00:47:10 +0000 (0:00:00.621) 0:02:39.806 ***** 2025-09-16 00:55:09.007481 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.007491 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.007500 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.007510 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.007525 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.007534 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.007544 | orchestrator | 2025-09-16 00:55:09.007554 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-09-16 00:55:09.007563 | orchestrator | Tuesday 16 September 2025 00:47:10 +0000 (0:00:00.528) 0:02:40.334 ***** 2025-09-16 00:55:09.007573 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.007582 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.007592 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.007602 | orchestrator | skipping: 
[testbed-node-0] 2025-09-16 00:55:09.007611 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.007626 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.007636 | orchestrator | 2025-09-16 00:55:09.007646 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-09-16 00:55:09.007656 | orchestrator | Tuesday 16 September 2025 00:47:11 +0000 (0:00:00.584) 0:02:40.919 ***** 2025-09-16 00:55:09.007666 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.007675 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.007685 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.007694 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.007704 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.007713 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.007723 | orchestrator | 2025-09-16 00:55:09.007733 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-09-16 00:55:09.007742 | orchestrator | Tuesday 16 September 2025 00:47:12 +0000 (0:00:00.704) 0:02:41.624 ***** 2025-09-16 00:55:09.007752 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.007762 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.007771 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.007781 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.007805 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.007815 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.007824 | orchestrator | 2025-09-16 00:55:09.007835 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-09-16 00:55:09.007851 | orchestrator | Tuesday 16 September 2025 00:47:12 +0000 (0:00:00.540) 0:02:42.165 ***** 2025-09-16 00:55:09.007861 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.007870 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.007880 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.007889 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.007899 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.007908 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.007918 | orchestrator | 2025-09-16 00:55:09.007927 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-09-16 00:55:09.007937 | orchestrator | Tuesday 16 September 2025 00:47:13 +0000 (0:00:00.685) 0:02:42.850 ***** 2025-09-16 00:55:09.007947 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.007956 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.007966 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.007976 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.007985 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.007995 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.008004 | orchestrator | 2025-09-16 00:55:09.008014 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-09-16 00:55:09.008024 | orchestrator | Tuesday 16 September 2025 00:47:14 +0000 (0:00:01.088) 0:02:43.938 ***** 2025-09-16 00:55:09.008033 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:09.008043 | orchestrator | 2025-09-16 00:55:09.008053 | 
orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-09-16 00:55:09.008062 | orchestrator | Tuesday 16 September 2025 00:47:15 +0000 (0:00:01.101) 0:02:45.040 ***** 2025-09-16 00:55:09.008081 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-09-16 00:55:09.008091 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-09-16 00:55:09.008101 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-09-16 00:55:09.008110 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-09-16 00:55:09.008120 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-09-16 00:55:09.008130 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-09-16 00:55:09.008139 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-09-16 00:55:09.008149 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-09-16 00:55:09.008158 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-09-16 00:55:09.008168 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-09-16 00:55:09.008177 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-09-16 00:55:09.008187 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-09-16 00:55:09.008196 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-09-16 00:55:09.008206 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-09-16 00:55:09.008215 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-09-16 00:55:09.008225 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-09-16 00:55:09.008234 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-09-16 00:55:09.008244 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-09-16 00:55:09.008253 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-09-16 00:55:09.008263 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-09-16 00:55:09.008272 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-09-16 00:55:09.008282 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-09-16 00:55:09.008292 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-09-16 00:55:09.008301 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-09-16 00:55:09.008311 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-09-16 00:55:09.008320 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-09-16 00:55:09.008329 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-09-16 00:55:09.008339 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-09-16 00:55:09.008348 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-09-16 00:55:09.008358 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-09-16 00:55:09.008367 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-09-16 00:55:09.008377 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-09-16 00:55:09.008390 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-09-16 00:55:09.008400 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-09-16 00:55:09.008410 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/ceph/tmp) 2025-09-16 00:55:09.008419 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-09-16 00:55:09.008429 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-09-16 00:55:09.008438 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-09-16 00:55:09.008448 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-09-16 00:55:09.008458 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-09-16 00:55:09.008468 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-09-16 00:55:09.008477 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-09-16 00:55:09.008487 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-09-16 00:55:09.008496 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-16 00:55:09.008513 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-09-16 00:55:09.008528 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-09-16 00:55:09.008538 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-09-16 00:55:09.008547 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-16 00:55:09.008557 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-09-16 00:55:09.008567 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-16 00:55:09.008576 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-16 00:55:09.008586 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-16 00:55:09.008595 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-16 00:55:09.008605 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-09-16 00:55:09.008614 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-16 00:55:09.008624 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-16 00:55:09.008633 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-16 00:55:09.008643 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-16 00:55:09.008652 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-16 00:55:09.008662 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-16 00:55:09.008671 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-16 00:55:09.008681 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-16 00:55:09.008690 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-16 00:55:09.008700 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-16 00:55:09.008709 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-16 00:55:09.008719 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-16 00:55:09.008728 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-16 00:55:09.008738 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-16 00:55:09.008747 | orchestrator | changed: 
[testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-16 00:55:09.008757 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-16 00:55:09.008767 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-16 00:55:09.008776 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-16 00:55:09.008825 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-16 00:55:09.008837 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-16 00:55:09.008847 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-16 00:55:09.008857 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-16 00:55:09.008867 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-09-16 00:55:09.008877 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-16 00:55:09.008886 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-16 00:55:09.008896 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-16 00:55:09.008906 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-09-16 00:55:09.008916 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-16 00:55:09.008925 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-09-16 00:55:09.008935 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-16 00:55:09.008952 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-16 00:55:09.008962 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-16 00:55:09.008971 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-09-16 00:55:09.008981 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-09-16 00:55:09.008990 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-09-16 00:55:09.009004 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-09-16 00:55:09.009014 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-16 00:55:09.009024 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-09-16 00:55:09.009034 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-09-16 00:55:09.009043 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-09-16 00:55:09.009053 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-09-16 00:55:09.009062 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-09-16 00:55:09.009072 | orchestrator | 2025-09-16 00:55:09.009082 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-09-16 00:55:09.009091 | orchestrator | Tuesday 16 September 2025 00:47:22 +0000 (0:00:06.870) 0:02:51.911 ***** 2025-09-16 00:55:09.009101 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.009111 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.009120 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.009130 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.009140 | 
orchestrator | 2025-09-16 00:55:09.009155 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-09-16 00:55:09.009165 | orchestrator | Tuesday 16 September 2025 00:47:23 +0000 (0:00:00.930) 0:02:52.842 ***** 2025-09-16 00:55:09.009175 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-16 00:55:09.009185 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-16 00:55:09.009195 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-16 00:55:09.009205 | orchestrator | 2025-09-16 00:55:09.009214 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-09-16 00:55:09.009224 | orchestrator | Tuesday 16 September 2025 00:47:24 +0000 (0:00:00.804) 0:02:53.646 ***** 2025-09-16 00:55:09.009234 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-16 00:55:09.009244 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-16 00:55:09.009254 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-16 00:55:09.009263 | orchestrator | 2025-09-16 00:55:09.009273 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-09-16 00:55:09.009283 | orchestrator | Tuesday 16 September 2025 00:47:25 +0000 (0:00:01.588) 0:02:55.235 ***** 2025-09-16 00:55:09.009292 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.009302 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.009312 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.009321 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.009331 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.009341 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.009349 | orchestrator | 2025-09-16 00:55:09.009357 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-09-16 00:55:09.009365 | orchestrator | Tuesday 16 September 2025 00:47:26 +0000 (0:00:00.578) 0:02:55.813 ***** 2025-09-16 00:55:09.009381 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.009389 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.009397 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.009405 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.009413 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.009421 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.009428 | orchestrator | 2025-09-16 00:55:09.009436 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-09-16 00:55:09.009444 | orchestrator | Tuesday 16 September 2025 00:47:27 +0000 (0:00:00.872) 0:02:56.685 ***** 2025-09-16 00:55:09.009452 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.009460 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.009468 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.009475 | orchestrator | skipping: [testbed-node-0] 2025-09-16 
00:55:09.009483 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.009491 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.009499 | orchestrator | 2025-09-16 00:55:09.009507 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-09-16 00:55:09.009515 | orchestrator | Tuesday 16 September 2025 00:47:27 +0000 (0:00:00.595) 0:02:57.280 ***** 2025-09-16 00:55:09.009523 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.009530 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.009538 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.009546 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.009554 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.009561 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.009569 | orchestrator | 2025-09-16 00:55:09.009577 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-09-16 00:55:09.009585 | orchestrator | Tuesday 16 September 2025 00:47:28 +0000 (0:00:00.979) 0:02:58.260 ***** 2025-09-16 00:55:09.009593 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.009601 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.009609 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.009616 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.009624 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.009632 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.009640 | orchestrator | 2025-09-16 00:55:09.009648 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-09-16 00:55:09.009659 | orchestrator | Tuesday 16 September 2025 00:47:29 +0000 (0:00:01.133) 0:02:59.393 ***** 2025-09-16 00:55:09.009668 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.009675 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.009683 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.009691 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.009699 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.009707 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.009715 | orchestrator | 2025-09-16 00:55:09.009722 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-09-16 00:55:09.009730 | orchestrator | Tuesday 16 September 2025 00:47:30 +0000 (0:00:00.582) 0:02:59.975 ***** 2025-09-16 00:55:09.009738 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.009746 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.009754 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.009762 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.009769 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.009777 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.009797 | orchestrator | 2025-09-16 00:55:09.009805 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-09-16 00:55:09.009813 | orchestrator | Tuesday 16 September 2025 00:47:31 +0000 (0:00:00.650) 0:03:00.626 ***** 2025-09-16 00:55:09.009821 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.009839 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.009848 | orchestrator | skipping: [testbed-node-5] 
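Note on the OSD counting in the tasks above and below: the 'ceph-volume lvm batch --report' tasks are skipped on this run, so ceph-config instead counts the OSDs that 'ceph-volume lvm list' already reports on each OSD node and stores the result in the num_osds fact, which the role then uses when deriving the _osd_memory_target fact seen further down. A minimal sketch of that fallback, assuming a registered variable named lvm_list rather than the exact ceph-ansible task source:

  - name: Run 'ceph-volume lvm list' to see how many osds have already been created
    command: ceph-volume lvm list --format json
    register: lvm_list
    changed_when: false

  - name: Set_fact num_osds (add existing osds)
    set_fact:
      num_osds: "{{ (lvm_list.stdout | from_json) | length }}"

The task names suggest the real role would also add any OSDs a batch report says it would create; since the report path is skipped here, only the existing count is used.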
2025-09-16 00:55:09.009856 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.009864 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.009872 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.009880 | orchestrator | 2025-09-16 00:55:09.009887 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-09-16 00:55:09.009895 | orchestrator | Tuesday 16 September 2025 00:47:31 +0000 (0:00:00.483) 0:03:01.110 ***** 2025-09-16 00:55:09.009903 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.009911 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.009919 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.009927 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.009935 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.009943 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.009951 | orchestrator | 2025-09-16 00:55:09.009958 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-09-16 00:55:09.009966 | orchestrator | Tuesday 16 September 2025 00:47:34 +0000 (0:00:02.989) 0:03:04.100 ***** 2025-09-16 00:55:09.009974 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.009982 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.009990 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.009998 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.010006 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.010014 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.010087 | orchestrator | 2025-09-16 00:55:09.010096 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-09-16 00:55:09.010104 | orchestrator | Tuesday 16 September 2025 00:47:35 +0000 (0:00:00.556) 0:03:04.656 ***** 2025-09-16 00:55:09.010112 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.010120 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.010128 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.010136 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.010144 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.010152 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.010160 | orchestrator | 2025-09-16 00:55:09.010168 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-09-16 00:55:09.010176 | orchestrator | Tuesday 16 September 2025 00:47:36 +0000 (0:00:00.824) 0:03:05.481 ***** 2025-09-16 00:55:09.010184 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.010192 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.010200 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.010208 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.010215 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.010223 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.010231 | orchestrator | 2025-09-16 00:55:09.010239 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-09-16 00:55:09.010247 | orchestrator | Tuesday 16 September 2025 00:47:36 +0000 (0:00:00.792) 0:03:06.274 ***** 2025-09-16 00:55:09.010255 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-16 00:55:09.010263 | orchestrator | ok: [testbed-node-4] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-16 00:55:09.010271 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-16 00:55:09.010279 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.010287 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.010295 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.010303 | orchestrator | 2025-09-16 00:55:09.010311 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-09-16 00:55:09.010319 | orchestrator | Tuesday 16 September 2025 00:47:37 +0000 (0:00:00.747) 0:03:07.021 ***** 2025-09-16 00:55:09.010334 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-09-16 00:55:09.010343 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-09-16 00:55:09.010356 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-09-16 00:55:09.010364 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-09-16 00:55:09.010400 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-09-16 00:55:09.010409 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-09-16 00:55:09.010417 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.010425 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.010433 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.010441 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.010449 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.010457 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.010464 | orchestrator | 2025-09-16 00:55:09.010472 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-09-16 00:55:09.010480 | orchestrator | Tuesday 16 September 2025 00:47:38 +0000 (0:00:00.690) 
0:03:07.711 ***** 2025-09-16 00:55:09.010488 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.010496 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.010503 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.010511 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.010519 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.010527 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.010535 | orchestrator | 2025-09-16 00:55:09.010543 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-09-16 00:55:09.010550 | orchestrator | Tuesday 16 September 2025 00:47:39 +0000 (0:00:00.725) 0:03:08.436 ***** 2025-09-16 00:55:09.010558 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.010566 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.010574 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.010582 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.010590 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.010598 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.010605 | orchestrator | 2025-09-16 00:55:09.010614 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-16 00:55:09.010622 | orchestrator | Tuesday 16 September 2025 00:47:39 +0000 (0:00:00.556) 0:03:08.993 ***** 2025-09-16 00:55:09.010634 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.010642 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.010650 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.010657 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.010665 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.010673 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.010681 | orchestrator | 2025-09-16 00:55:09.010689 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-16 00:55:09.010696 | orchestrator | Tuesday 16 September 2025 00:47:40 +0000 (0:00:00.778) 0:03:09.772 ***** 2025-09-16 00:55:09.010704 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.010712 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.010720 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.010727 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.010735 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.010743 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.010750 | orchestrator | 2025-09-16 00:55:09.010758 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-16 00:55:09.010766 | orchestrator | Tuesday 16 September 2025 00:47:41 +0000 (0:00:00.770) 0:03:10.542 ***** 2025-09-16 00:55:09.010774 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.010782 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.010802 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.010810 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.010817 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.010825 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.010833 | orchestrator | 2025-09-16 00:55:09.010841 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-16 00:55:09.010849 | 
orchestrator | Tuesday 16 September 2025 00:47:42 +0000 (0:00:00.992) 0:03:11.535 ***** 2025-09-16 00:55:09.010857 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.010865 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.010873 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.010881 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.010889 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.010897 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.010905 | orchestrator | 2025-09-16 00:55:09.010913 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-16 00:55:09.010921 | orchestrator | Tuesday 16 September 2025 00:47:42 +0000 (0:00:00.532) 0:03:12.068 ***** 2025-09-16 00:55:09.010929 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-16 00:55:09.010940 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-16 00:55:09.010949 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-16 00:55:09.010957 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.010965 | orchestrator | 2025-09-16 00:55:09.010973 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-16 00:55:09.010980 | orchestrator | Tuesday 16 September 2025 00:47:43 +0000 (0:00:00.477) 0:03:12.546 ***** 2025-09-16 00:55:09.010988 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-16 00:55:09.010996 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-16 00:55:09.011004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-16 00:55:09.011012 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.011020 | orchestrator | 2025-09-16 00:55:09.011028 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-16 00:55:09.011036 | orchestrator | Tuesday 16 September 2025 00:47:43 +0000 (0:00:00.442) 0:03:12.989 ***** 2025-09-16 00:55:09.011044 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-16 00:55:09.011052 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-16 00:55:09.011082 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-16 00:55:09.011092 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.011106 | orchestrator | 2025-09-16 00:55:09.011114 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-16 00:55:09.011122 | orchestrator | Tuesday 16 September 2025 00:47:44 +0000 (0:00:00.637) 0:03:13.627 ***** 2025-09-16 00:55:09.011130 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.011138 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.011146 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.011153 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.011161 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.011169 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.011177 | orchestrator | 2025-09-16 00:55:09.011185 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-16 00:55:09.011193 | orchestrator | Tuesday 16 September 2025 00:47:44 +0000 (0:00:00.624) 0:03:14.251 ***** 2025-09-16 00:55:09.011201 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-16 00:55:09.011209 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2025-09-16 00:55:09.011216 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-16 00:55:09.011224 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-09-16 00:55:09.011232 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.011240 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-09-16 00:55:09.011247 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.011255 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-09-16 00:55:09.011263 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.011271 | orchestrator | 2025-09-16 00:55:09.011279 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-09-16 00:55:09.011286 | orchestrator | Tuesday 16 September 2025 00:47:46 +0000 (0:00:01.586) 0:03:15.838 ***** 2025-09-16 00:55:09.011294 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.011302 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.011310 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.011317 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.011325 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.011333 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.011340 | orchestrator | 2025-09-16 00:55:09.011348 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-16 00:55:09.011356 | orchestrator | Tuesday 16 September 2025 00:47:49 +0000 (0:00:02.817) 0:03:18.656 ***** 2025-09-16 00:55:09.011364 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.011371 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.011379 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.011387 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.011394 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.011402 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.011410 | orchestrator | 2025-09-16 00:55:09.011417 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-16 00:55:09.011425 | orchestrator | Tuesday 16 September 2025 00:47:50 +0000 (0:00:01.661) 0:03:20.317 ***** 2025-09-16 00:55:09.011433 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.011441 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.011449 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.011457 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:09.011465 | orchestrator | 2025-09-16 00:55:09.011472 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-16 00:55:09.011480 | orchestrator | Tuesday 16 September 2025 00:47:51 +0000 (0:00:00.901) 0:03:21.218 ***** 2025-09-16 00:55:09.011488 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.011496 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.011504 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.011515 | orchestrator | 2025-09-16 00:55:09.011529 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-16 00:55:09.011543 | orchestrator | Tuesday 16 September 2025 00:47:52 +0000 (0:00:00.382) 0:03:21.601 ***** 2025-09-16 00:55:09.011559 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.011567 | orchestrator | changed: [testbed-node-0] 
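Note on the handler pattern shown here: for each daemon type, ceph-handler sets a _<daemon>_handler_called fact, copies a generated restart script into the temporary directory created by 'Make tempdir for scripts', and only executes that script when the matching handler_<daemon>_status fact indicates the daemon is actually running on a node; on this fresh deployment those conditions are not met, which is why the restart tasks that follow are skipped. A rough sketch of that guard, with the template name, tmpdirpath register and group name assumed for illustration rather than taken from the ceph-ansible source:

  - name: Copy mon restart script
    template:
      src: restart_mon_daemon.sh.j2
      dest: "{{ tmpdirpath.path }}/restart_mon_daemon.sh"
      mode: "0750"

  - name: Restart ceph mon daemon(s)
    command: "{{ tmpdirpath.path }}/restart_mon_daemon.sh"
    when: hostvars[item]['handler_mon_status'] | default(false) | bool
    with_items: "{{ groups['mons'] }}"
    delegate_to: "{{ item }}"
    run_once: true

In this sketch the combination of run_once, delegate_to and the per-node status fact restarts the monitors one node at a time from a single host rather than bouncing the whole quorum at once.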
2025-09-16 00:55:09.011575 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.011582 | orchestrator | 2025-09-16 00:55:09.011590 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-16 00:55:09.011598 | orchestrator | Tuesday 16 September 2025 00:47:53 +0000 (0:00:01.147) 0:03:22.749 ***** 2025-09-16 00:55:09.011606 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-16 00:55:09.011614 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-16 00:55:09.011622 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-16 00:55:09.011629 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.011637 | orchestrator | 2025-09-16 00:55:09.011645 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-16 00:55:09.011656 | orchestrator | Tuesday 16 September 2025 00:47:54 +0000 (0:00:00.988) 0:03:23.737 ***** 2025-09-16 00:55:09.011664 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.011672 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.011680 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.011688 | orchestrator | 2025-09-16 00:55:09.011696 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-16 00:55:09.011704 | orchestrator | Tuesday 16 September 2025 00:47:54 +0000 (0:00:00.474) 0:03:24.211 ***** 2025-09-16 00:55:09.011711 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.011719 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.011727 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.011735 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.011743 | orchestrator | 2025-09-16 00:55:09.011751 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-16 00:55:09.011758 | orchestrator | Tuesday 16 September 2025 00:47:55 +0000 (0:00:01.144) 0:03:25.356 ***** 2025-09-16 00:55:09.011766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-16 00:55:09.011774 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-16 00:55:09.011782 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-16 00:55:09.011927 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.011941 | orchestrator | 2025-09-16 00:55:09.011949 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-16 00:55:09.011957 | orchestrator | Tuesday 16 September 2025 00:47:56 +0000 (0:00:00.368) 0:03:25.724 ***** 2025-09-16 00:55:09.011966 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.011973 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.011981 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.011989 | orchestrator | 2025-09-16 00:55:09.011997 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-16 00:55:09.012005 | orchestrator | Tuesday 16 September 2025 00:47:56 +0000 (0:00:00.561) 0:03:26.285 ***** 2025-09-16 00:55:09.012013 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.012021 | orchestrator | 2025-09-16 00:55:09.012029 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-16 00:55:09.012036 
| orchestrator | Tuesday 16 September 2025 00:47:57 +0000 (0:00:00.244) 0:03:26.530 ***** 2025-09-16 00:55:09.012044 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.012052 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.012060 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.012068 | orchestrator | 2025-09-16 00:55:09.012076 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-16 00:55:09.012084 | orchestrator | Tuesday 16 September 2025 00:47:57 +0000 (0:00:00.327) 0:03:26.857 ***** 2025-09-16 00:55:09.012091 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.012099 | orchestrator | 2025-09-16 00:55:09.012107 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-16 00:55:09.012139 | orchestrator | Tuesday 16 September 2025 00:47:57 +0000 (0:00:00.220) 0:03:27.077 ***** 2025-09-16 00:55:09.012154 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.012163 | orchestrator | 2025-09-16 00:55:09.012171 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-16 00:55:09.012178 | orchestrator | Tuesday 16 September 2025 00:47:57 +0000 (0:00:00.217) 0:03:27.295 ***** 2025-09-16 00:55:09.012186 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.012194 | orchestrator | 2025-09-16 00:55:09.012202 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-16 00:55:09.012210 | orchestrator | Tuesday 16 September 2025 00:47:57 +0000 (0:00:00.120) 0:03:27.416 ***** 2025-09-16 00:55:09.012217 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.012225 | orchestrator | 2025-09-16 00:55:09.012233 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-16 00:55:09.012241 | orchestrator | Tuesday 16 September 2025 00:47:58 +0000 (0:00:00.251) 0:03:27.668 ***** 2025-09-16 00:55:09.012249 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.012256 | orchestrator | 2025-09-16 00:55:09.012264 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-16 00:55:09.012272 | orchestrator | Tuesday 16 September 2025 00:47:58 +0000 (0:00:00.268) 0:03:27.936 ***** 2025-09-16 00:55:09.012280 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-16 00:55:09.012288 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-16 00:55:09.012296 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-16 00:55:09.012304 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.012312 | orchestrator | 2025-09-16 00:55:09.012319 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-16 00:55:09.012327 | orchestrator | Tuesday 16 September 2025 00:47:59 +0000 (0:00:00.757) 0:03:28.693 ***** 2025-09-16 00:55:09.012335 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.012343 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.012351 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.012359 | orchestrator | 2025-09-16 00:55:09.012366 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-16 00:55:09.012373 | orchestrator | Tuesday 16 September 2025 00:47:59 +0000 (0:00:00.684) 0:03:29.378 ***** 2025-09-16 00:55:09.012380 | 
orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.012386 | orchestrator | 2025-09-16 00:55:09.012393 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-16 00:55:09.012400 | orchestrator | Tuesday 16 September 2025 00:48:00 +0000 (0:00:00.286) 0:03:29.664 ***** 2025-09-16 00:55:09.012406 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.012413 | orchestrator | 2025-09-16 00:55:09.012419 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-16 00:55:09.012426 | orchestrator | Tuesday 16 September 2025 00:48:00 +0000 (0:00:00.220) 0:03:29.884 ***** 2025-09-16 00:55:09.012433 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.012439 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.012446 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.012457 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.012464 | orchestrator | 2025-09-16 00:55:09.012470 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-16 00:55:09.012477 | orchestrator | Tuesday 16 September 2025 00:48:01 +0000 (0:00:00.850) 0:03:30.735 ***** 2025-09-16 00:55:09.012484 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.012490 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.012497 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.012504 | orchestrator | 2025-09-16 00:55:09.012510 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-16 00:55:09.012517 | orchestrator | Tuesday 16 September 2025 00:48:01 +0000 (0:00:00.281) 0:03:31.016 ***** 2025-09-16 00:55:09.012527 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.012534 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.012541 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.012547 | orchestrator | 2025-09-16 00:55:09.012554 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-16 00:55:09.012561 | orchestrator | Tuesday 16 September 2025 00:48:02 +0000 (0:00:01.163) 0:03:32.180 ***** 2025-09-16 00:55:09.012567 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-16 00:55:09.012574 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-16 00:55:09.012600 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-16 00:55:09.012608 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.012614 | orchestrator | 2025-09-16 00:55:09.012621 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-16 00:55:09.012628 | orchestrator | Tuesday 16 September 2025 00:48:03 +0000 (0:00:00.613) 0:03:32.793 ***** 2025-09-16 00:55:09.012634 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.012641 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.012648 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.012654 | orchestrator | 2025-09-16 00:55:09.012661 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-16 00:55:09.012668 | orchestrator | Tuesday 16 September 2025 00:48:03 +0000 (0:00:00.365) 0:03:33.159 ***** 2025-09-16 00:55:09.012675 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.012681 | 
orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.012688 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.012695 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.012702 | orchestrator | 2025-09-16 00:55:09.012708 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-16 00:55:09.012715 | orchestrator | Tuesday 16 September 2025 00:48:04 +0000 (0:00:00.889) 0:03:34.049 ***** 2025-09-16 00:55:09.012722 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.012728 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.012735 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.012742 | orchestrator | 2025-09-16 00:55:09.012748 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-16 00:55:09.012755 | orchestrator | Tuesday 16 September 2025 00:48:04 +0000 (0:00:00.297) 0:03:34.346 ***** 2025-09-16 00:55:09.012762 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.012769 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.012775 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.012782 | orchestrator | 2025-09-16 00:55:09.012809 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-16 00:55:09.012820 | orchestrator | Tuesday 16 September 2025 00:48:06 +0000 (0:00:01.614) 0:03:35.960 ***** 2025-09-16 00:55:09.012832 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-16 00:55:09.012840 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-16 00:55:09.012847 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-16 00:55:09.012853 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.012860 | orchestrator | 2025-09-16 00:55:09.012867 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-16 00:55:09.012874 | orchestrator | Tuesday 16 September 2025 00:48:07 +0000 (0:00:00.619) 0:03:36.580 ***** 2025-09-16 00:55:09.012881 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.012887 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.012894 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.012901 | orchestrator | 2025-09-16 00:55:09.012908 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-09-16 00:55:09.012915 | orchestrator | Tuesday 16 September 2025 00:48:07 +0000 (0:00:00.484) 0:03:37.064 ***** 2025-09-16 00:55:09.012921 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.012933 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.012940 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.012946 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.012953 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.012960 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.012967 | orchestrator | 2025-09-16 00:55:09.012973 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-16 00:55:09.012980 | orchestrator | Tuesday 16 September 2025 00:48:08 +0000 (0:00:00.737) 0:03:37.801 ***** 2025-09-16 00:55:09.012987 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.012994 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.013000 | 
orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.013007 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-1, testbed-node-2, testbed-node-0 2025-09-16 00:55:09.013014 | orchestrator | 2025-09-16 00:55:09.013021 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-16 00:55:09.013028 | orchestrator | Tuesday 16 September 2025 00:48:09 +0000 (0:00:01.375) 0:03:39.177 ***** 2025-09-16 00:55:09.013034 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.013041 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.013048 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.013055 | orchestrator | 2025-09-16 00:55:09.013061 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-16 00:55:09.013068 | orchestrator | Tuesday 16 September 2025 00:48:10 +0000 (0:00:00.311) 0:03:39.489 ***** 2025-09-16 00:55:09.013075 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.013085 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.013092 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.013099 | orchestrator | 2025-09-16 00:55:09.013106 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-16 00:55:09.013113 | orchestrator | Tuesday 16 September 2025 00:48:11 +0000 (0:00:01.761) 0:03:41.250 ***** 2025-09-16 00:55:09.013119 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-16 00:55:09.013126 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-16 00:55:09.013133 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-16 00:55:09.013140 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.013147 | orchestrator | 2025-09-16 00:55:09.013154 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-16 00:55:09.013160 | orchestrator | Tuesday 16 September 2025 00:48:12 +0000 (0:00:00.527) 0:03:41.778 ***** 2025-09-16 00:55:09.013167 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.013174 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.013181 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.013187 | orchestrator | 2025-09-16 00:55:09.013194 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-09-16 00:55:09.013201 | orchestrator | 2025-09-16 00:55:09.013208 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-16 00:55:09.013236 | orchestrator | Tuesday 16 September 2025 00:48:12 +0000 (0:00:00.485) 0:03:42.263 ***** 2025-09-16 00:55:09.013244 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:09.013251 | orchestrator | 2025-09-16 00:55:09.013258 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-16 00:55:09.013264 | orchestrator | Tuesday 16 September 2025 00:48:13 +0000 (0:00:00.537) 0:03:42.801 ***** 2025-09-16 00:55:09.013271 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:09.013278 | orchestrator | 2025-09-16 00:55:09.013284 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 
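Note on the container checks that follow: check_running_containers.yml probes each node for existing ceph daemon containers (mon, osd, mds, rgw, mgr, rbd mirror, nfs, crash, exporter) and turns the results into handler_<daemon>_status facts, so that later handlers only restart daemons which are actually present. A minimal sketch of one such probe, assuming podman/docker behind a container_binary variable and a register name chosen for illustration:

  - name: Check for a mon container
    command: "{{ container_binary }} ps -q --filter name=ceph-mon-{{ ansible_facts['hostname'] }}"
    register: ceph_mon_container_stat
    changed_when: false
    failed_when: false

  - name: Set_fact handler_mon_status
    set_fact:
      handler_mon_status: "{{ (ceph_mon_container_stat.stdout_lines | default([])) | length > 0 }}"

In the output below, the mon, mgr, ceph-crash and ceph-exporter checks return ok on the control nodes, while the osd, mds, rgw, rbd mirror and nfs checks are skipped on those hosts.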
2025-09-16 00:55:09.013291 | orchestrator | Tuesday 16 September 2025 00:48:13 +0000 (0:00:00.515) 0:03:43.317 ***** 2025-09-16 00:55:09.013302 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.013309 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.013316 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.013323 | orchestrator | 2025-09-16 00:55:09.013329 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-16 00:55:09.013336 | orchestrator | Tuesday 16 September 2025 00:48:14 +0000 (0:00:00.722) 0:03:44.039 ***** 2025-09-16 00:55:09.013343 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.013350 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.013356 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.013363 | orchestrator | 2025-09-16 00:55:09.013370 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-16 00:55:09.013376 | orchestrator | Tuesday 16 September 2025 00:48:15 +0000 (0:00:00.394) 0:03:44.434 ***** 2025-09-16 00:55:09.013383 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.013389 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.013396 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.013403 | orchestrator | 2025-09-16 00:55:09.013410 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-16 00:55:09.013416 | orchestrator | Tuesday 16 September 2025 00:48:15 +0000 (0:00:00.559) 0:03:44.993 ***** 2025-09-16 00:55:09.013423 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.013429 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.013436 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.013443 | orchestrator | 2025-09-16 00:55:09.013449 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-16 00:55:09.013456 | orchestrator | Tuesday 16 September 2025 00:48:15 +0000 (0:00:00.211) 0:03:45.204 ***** 2025-09-16 00:55:09.013463 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.013469 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.013476 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.013482 | orchestrator | 2025-09-16 00:55:09.013489 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-16 00:55:09.013496 | orchestrator | Tuesday 16 September 2025 00:48:16 +0000 (0:00:00.633) 0:03:45.837 ***** 2025-09-16 00:55:09.013502 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.013509 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.013515 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.013522 | orchestrator | 2025-09-16 00:55:09.013529 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-16 00:55:09.013535 | orchestrator | Tuesday 16 September 2025 00:48:16 +0000 (0:00:00.323) 0:03:46.160 ***** 2025-09-16 00:55:09.013542 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.013549 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.013555 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.013562 | orchestrator | 2025-09-16 00:55:09.013569 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-16 00:55:09.013575 | orchestrator | Tuesday 16 September 2025 00:48:17 +0000 
(0:00:00.439) 0:03:46.600 ***** 2025-09-16 00:55:09.013582 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.013588 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.013595 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.013602 | orchestrator | 2025-09-16 00:55:09.013608 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-16 00:55:09.013615 | orchestrator | Tuesday 16 September 2025 00:48:17 +0000 (0:00:00.700) 0:03:47.300 ***** 2025-09-16 00:55:09.013622 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.013628 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.013635 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.013642 | orchestrator | 2025-09-16 00:55:09.013648 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-16 00:55:09.013655 | orchestrator | Tuesday 16 September 2025 00:48:18 +0000 (0:00:00.776) 0:03:48.076 ***** 2025-09-16 00:55:09.013662 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.013668 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.013679 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.013685 | orchestrator | 2025-09-16 00:55:09.013695 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-16 00:55:09.013702 | orchestrator | Tuesday 16 September 2025 00:48:19 +0000 (0:00:00.420) 0:03:48.497 ***** 2025-09-16 00:55:09.013709 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.013715 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.013722 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.013729 | orchestrator | 2025-09-16 00:55:09.013735 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-16 00:55:09.013742 | orchestrator | Tuesday 16 September 2025 00:48:19 +0000 (0:00:00.475) 0:03:48.973 ***** 2025-09-16 00:55:09.013749 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.013755 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.013762 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.013769 | orchestrator | 2025-09-16 00:55:09.013775 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-16 00:55:09.013782 | orchestrator | Tuesday 16 September 2025 00:48:19 +0000 (0:00:00.355) 0:03:49.329 ***** 2025-09-16 00:55:09.013801 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.013808 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.013814 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.013821 | orchestrator | 2025-09-16 00:55:09.013828 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-16 00:55:09.013853 | orchestrator | Tuesday 16 September 2025 00:48:20 +0000 (0:00:00.266) 0:03:49.595 ***** 2025-09-16 00:55:09.013860 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.013867 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.013874 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.013881 | orchestrator | 2025-09-16 00:55:09.013888 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-16 00:55:09.013895 | orchestrator | Tuesday 16 September 2025 00:48:20 +0000 (0:00:00.352) 0:03:49.947 ***** 2025-09-16 00:55:09.013901 | orchestrator | skipping: [testbed-node-0] 2025-09-16 
00:55:09.013908 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.013915 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.013922 | orchestrator | 2025-09-16 00:55:09.013928 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-16 00:55:09.013935 | orchestrator | Tuesday 16 September 2025 00:48:21 +0000 (0:00:00.537) 0:03:50.484 ***** 2025-09-16 00:55:09.013942 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.013949 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.013956 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.013962 | orchestrator | 2025-09-16 00:55:09.013969 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-16 00:55:09.013976 | orchestrator | Tuesday 16 September 2025 00:48:21 +0000 (0:00:00.283) 0:03:50.768 ***** 2025-09-16 00:55:09.013982 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.013989 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.013996 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.014002 | orchestrator | 2025-09-16 00:55:09.014009 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-16 00:55:09.014034 | orchestrator | Tuesday 16 September 2025 00:48:21 +0000 (0:00:00.498) 0:03:51.266 ***** 2025-09-16 00:55:09.014042 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.014049 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.014055 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.014062 | orchestrator | 2025-09-16 00:55:09.014069 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-16 00:55:09.014075 | orchestrator | Tuesday 16 September 2025 00:48:22 +0000 (0:00:00.348) 0:03:51.614 ***** 2025-09-16 00:55:09.014082 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.014089 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.014095 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.014102 | orchestrator | 2025-09-16 00:55:09.014113 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-09-16 00:55:09.014120 | orchestrator | Tuesday 16 September 2025 00:48:22 +0000 (0:00:00.641) 0:03:52.256 ***** 2025-09-16 00:55:09.014127 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.014133 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.014140 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.014147 | orchestrator | 2025-09-16 00:55:09.014153 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-09-16 00:55:09.014160 | orchestrator | Tuesday 16 September 2025 00:48:23 +0000 (0:00:00.396) 0:03:52.653 ***** 2025-09-16 00:55:09.014167 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:09.014173 | orchestrator | 2025-09-16 00:55:09.014180 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-09-16 00:55:09.014187 | orchestrator | Tuesday 16 September 2025 00:48:23 +0000 (0:00:00.568) 0:03:53.221 ***** 2025-09-16 00:55:09.014193 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.014200 | orchestrator | 2025-09-16 00:55:09.014207 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-09-16 00:55:09.014213 | 
orchestrator | Tuesday 16 September 2025 00:48:24 +0000 (0:00:00.324) 0:03:53.546 ***** 2025-09-16 00:55:09.014220 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-09-16 00:55:09.014227 | orchestrator | 2025-09-16 00:55:09.014234 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-09-16 00:55:09.014240 | orchestrator | Tuesday 16 September 2025 00:48:24 +0000 (0:00:00.885) 0:03:54.431 ***** 2025-09-16 00:55:09.014247 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.014253 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.014260 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.014267 | orchestrator | 2025-09-16 00:55:09.014274 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-09-16 00:55:09.014280 | orchestrator | Tuesday 16 September 2025 00:48:25 +0000 (0:00:00.289) 0:03:54.721 ***** 2025-09-16 00:55:09.014287 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.014294 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.014300 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.014307 | orchestrator | 2025-09-16 00:55:09.014314 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-09-16 00:55:09.014321 | orchestrator | Tuesday 16 September 2025 00:48:25 +0000 (0:00:00.271) 0:03:54.992 ***** 2025-09-16 00:55:09.014327 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.014337 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.014344 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.014351 | orchestrator | 2025-09-16 00:55:09.014357 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-09-16 00:55:09.014364 | orchestrator | Tuesday 16 September 2025 00:48:26 +0000 (0:00:01.202) 0:03:56.195 ***** 2025-09-16 00:55:09.014371 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.014377 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.014384 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.014390 | orchestrator | 2025-09-16 00:55:09.014397 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-09-16 00:55:09.014404 | orchestrator | Tuesday 16 September 2025 00:48:27 +0000 (0:00:01.031) 0:03:57.227 ***** 2025-09-16 00:55:09.014410 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.014417 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.014424 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.014430 | orchestrator | 2025-09-16 00:55:09.014437 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-09-16 00:55:09.014444 | orchestrator | Tuesday 16 September 2025 00:48:28 +0000 (0:00:00.628) 0:03:57.855 ***** 2025-09-16 00:55:09.014450 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.014457 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.014464 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.014475 | orchestrator | 2025-09-16 00:55:09.014500 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-09-16 00:55:09.014508 | orchestrator | Tuesday 16 September 2025 00:48:29 +0000 (0:00:00.590) 0:03:58.446 ***** 2025-09-16 00:55:09.014515 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.014522 | orchestrator | 2025-09-16 00:55:09.014528 | orchestrator | TASK 
[ceph-mon : Slurp admin keyring] ****************************************** 2025-09-16 00:55:09.014535 | orchestrator | Tuesday 16 September 2025 00:48:30 +0000 (0:00:01.206) 0:03:59.652 ***** 2025-09-16 00:55:09.014542 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.014548 | orchestrator | 2025-09-16 00:55:09.014555 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-09-16 00:55:09.014561 | orchestrator | Tuesday 16 September 2025 00:48:30 +0000 (0:00:00.702) 0:04:00.355 ***** 2025-09-16 00:55:09.014568 | orchestrator | changed: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:55:09.014574 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-16 00:55:09.014581 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:55:09.014588 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-16 00:55:09.014595 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-16 00:55:09.014601 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-16 00:55:09.014608 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-16 00:55:09.014614 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-09-16 00:55:09.014621 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-16 00:55:09.014628 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2025-09-16 00:55:09.014634 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-09-16 00:55:09.014641 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-09-16 00:55:09.014648 | orchestrator | 2025-09-16 00:55:09.014654 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-09-16 00:55:09.014661 | orchestrator | Tuesday 16 September 2025 00:48:34 +0000 (0:00:03.344) 0:04:03.700 ***** 2025-09-16 00:55:09.014667 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.014674 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.014681 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.014688 | orchestrator | 2025-09-16 00:55:09.014694 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-09-16 00:55:09.014701 | orchestrator | Tuesday 16 September 2025 00:48:35 +0000 (0:00:01.372) 0:04:05.073 ***** 2025-09-16 00:55:09.014708 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.014714 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.014721 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.014728 | orchestrator | 2025-09-16 00:55:09.014734 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-09-16 00:55:09.014741 | orchestrator | Tuesday 16 September 2025 00:48:35 +0000 (0:00:00.314) 0:04:05.388 ***** 2025-09-16 00:55:09.014748 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.014754 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.014761 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.014768 | orchestrator | 2025-09-16 00:55:09.014775 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-09-16 00:55:09.014781 | orchestrator | Tuesday 16 September 2025 00:48:36 +0000 (0:00:00.316) 0:04:05.704 ***** 2025-09-16 00:55:09.014805 | orchestrator | changed: [testbed-node-0] 2025-09-16 
00:55:09.014812 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.014818 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.014825 | orchestrator | 2025-09-16 00:55:09.014832 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-09-16 00:55:09.014838 | orchestrator | Tuesday 16 September 2025 00:48:37 +0000 (0:00:01.585) 0:04:07.290 ***** 2025-09-16 00:55:09.014845 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.014856 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.014863 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.014869 | orchestrator | 2025-09-16 00:55:09.014876 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-09-16 00:55:09.014883 | orchestrator | Tuesday 16 September 2025 00:48:39 +0000 (0:00:01.521) 0:04:08.812 ***** 2025-09-16 00:55:09.014889 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.014896 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.014903 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.014909 | orchestrator | 2025-09-16 00:55:09.014916 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-09-16 00:55:09.014923 | orchestrator | Tuesday 16 September 2025 00:48:39 +0000 (0:00:00.299) 0:04:09.112 ***** 2025-09-16 00:55:09.014933 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:09.014940 | orchestrator | 2025-09-16 00:55:09.014946 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-09-16 00:55:09.014953 | orchestrator | Tuesday 16 September 2025 00:48:40 +0000 (0:00:00.588) 0:04:09.701 ***** 2025-09-16 00:55:09.014960 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.014966 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.014973 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.014980 | orchestrator | 2025-09-16 00:55:09.014986 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-09-16 00:55:09.014993 | orchestrator | Tuesday 16 September 2025 00:48:40 +0000 (0:00:00.537) 0:04:10.238 ***** 2025-09-16 00:55:09.015000 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.015007 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.015013 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.015020 | orchestrator | 2025-09-16 00:55:09.015026 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-09-16 00:55:09.015033 | orchestrator | Tuesday 16 September 2025 00:48:41 +0000 (0:00:00.311) 0:04:10.550 ***** 2025-09-16 00:55:09.015040 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:09.015047 | orchestrator | 2025-09-16 00:55:09.015072 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-09-16 00:55:09.015081 | orchestrator | Tuesday 16 September 2025 00:48:41 +0000 (0:00:00.486) 0:04:11.037 ***** 2025-09-16 00:55:09.015087 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.015094 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.015101 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.015107 | orchestrator | 2025-09-16 
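The "Generate monitor initial keyring", "Create admin keyring", "Generate initial monmap", and "Ceph monitor mkfs with keyring" tasks above wrap the standard Ceph monitor bootstrap commands. A rough hand-run equivalent for a single monitor, with the FSID and file paths as placeholders rather than values taken from this job:

    # Create the mon. bootstrap keyring and an admin keyring, then merge them.
    ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
    ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin \
        --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
    ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
    # Build the initial monmap and initialise the monitor store (FSID is a placeholder).
    monmaptool --create --add testbed-node-0 192.168.16.10 --fsid <FSID> /tmp/monmap
    ceph-mon --mkfs -i testbed-node-0 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

In the containerized deployment shown here the same commands run inside the ceph container, which is why the role first sets the ceph-mon and monmaptool container command facts and then starts the monitor through the generated systemd unit and ceph-mon.target.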
00:55:09.015114 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-09-16 00:55:09.015121 | orchestrator | Tuesday 16 September 2025 00:48:43 +0000 (0:00:02.126) 0:04:13.163 ***** 2025-09-16 00:55:09.015127 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.015134 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.015141 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.015147 | orchestrator | 2025-09-16 00:55:09.015154 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-09-16 00:55:09.015161 | orchestrator | Tuesday 16 September 2025 00:48:44 +0000 (0:00:01.241) 0:04:14.405 ***** 2025-09-16 00:55:09.015167 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.015174 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.015180 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.015187 | orchestrator | 2025-09-16 00:55:09.015194 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-09-16 00:55:09.015200 | orchestrator | Tuesday 16 September 2025 00:48:47 +0000 (0:00:02.083) 0:04:16.488 ***** 2025-09-16 00:55:09.015207 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.015213 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.015220 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.015227 | orchestrator | 2025-09-16 00:55:09.015237 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-09-16 00:55:09.015244 | orchestrator | Tuesday 16 September 2025 00:48:49 +0000 (0:00:02.243) 0:04:18.732 ***** 2025-09-16 00:55:09.015251 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:09.015257 | orchestrator | 2025-09-16 00:55:09.015264 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2025-09-16 00:55:09.015271 | orchestrator | Tuesday 16 September 2025 00:48:50 +0000 (0:00:00.846) 0:04:19.579 ***** 2025-09-16 00:55:09.015277 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.015284 | orchestrator | 2025-09-16 00:55:09.015291 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-09-16 00:55:09.015297 | orchestrator | Tuesday 16 September 2025 00:48:51 +0000 (0:00:01.257) 0:04:20.836 ***** 2025-09-16 00:55:09.015304 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.015311 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.015317 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.015324 | orchestrator | 2025-09-16 00:55:09.015331 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-09-16 00:55:09.015338 | orchestrator | Tuesday 16 September 2025 00:49:01 +0000 (0:00:09.809) 0:04:30.645 ***** 2025-09-16 00:55:09.015344 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.015351 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.015358 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.015364 | orchestrator | 2025-09-16 00:55:09.015371 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-09-16 00:55:09.015377 | orchestrator | Tuesday 16 September 2025 00:49:01 +0000 (0:00:00.319) 0:04:30.965 ***** 2025-09-16 00:55:09.015385 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__15db1dbd5eec2920214b94699ccfe2baa2fa60ea'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-09-16 00:55:09.015393 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__15db1dbd5eec2920214b94699ccfe2baa2fa60ea'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-09-16 00:55:09.015404 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__15db1dbd5eec2920214b94699ccfe2baa2fa60ea'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-09-16 00:55:09.015412 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__15db1dbd5eec2920214b94699ccfe2baa2fa60ea'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-09-16 00:55:09.015437 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__15db1dbd5eec2920214b94699ccfe2baa2fa60ea'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-09-16 
00:55:09.015446 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__15db1dbd5eec2920214b94699ccfe2baa2fa60ea'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__15db1dbd5eec2920214b94699ccfe2baa2fa60ea'}])  2025-09-16 00:55:09.015461 | orchestrator | 2025-09-16 00:55:09.015468 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-16 00:55:09.015474 | orchestrator | Tuesday 16 September 2025 00:49:16 +0000 (0:00:14.476) 0:04:45.442 ***** 2025-09-16 00:55:09.015481 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.015488 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.015495 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.015501 | orchestrator | 2025-09-16 00:55:09.015508 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-16 00:55:09.015515 | orchestrator | Tuesday 16 September 2025 00:49:16 +0000 (0:00:00.382) 0:04:45.824 ***** 2025-09-16 00:55:09.015521 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:09.015528 | orchestrator | 2025-09-16 00:55:09.015535 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-16 00:55:09.015541 | orchestrator | Tuesday 16 September 2025 00:49:17 +0000 (0:00:00.764) 0:04:46.589 ***** 2025-09-16 00:55:09.015548 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.015555 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.015561 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.015568 | orchestrator | 2025-09-16 00:55:09.015575 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-16 00:55:09.015581 | orchestrator | Tuesday 16 September 2025 00:49:17 +0000 (0:00:00.320) 0:04:46.910 ***** 2025-09-16 00:55:09.015588 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.015595 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.015602 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.015608 | orchestrator | 2025-09-16 00:55:09.015615 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-16 00:55:09.015622 | orchestrator | Tuesday 16 September 2025 00:49:17 +0000 (0:00:00.349) 0:04:47.259 ***** 2025-09-16 00:55:09.015628 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-16 00:55:09.015635 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-16 00:55:09.015642 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-16 00:55:09.015649 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.015655 | orchestrator | 2025-09-16 00:55:09.015662 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-16 00:55:09.015669 | orchestrator | Tuesday 16 September 2025 00:49:18 +0000 (0:00:00.574) 0:04:47.833 ***** 2025-09-16 00:55:09.015675 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.015682 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.015689 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.015695 | 
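"Set cluster configs" pushes the networking options from the group variables into the cluster's central configuration one item at a time, and "Waiting for the monitor(s) to form the quorum..." plus "Fetch ceph initial keys" only succeed once the three monitors can reach each other. The same effect can be obtained and verified from the CLI with commands like the following, using the values visible in the log:

    # Settings applied by the role, expressed as plain ceph config calls.
    ceph config set global public_network 192.168.16.0/20
    ceph config set global cluster_network 192.168.16.0/20
    ceph config set global osd_pool_default_crush_rule -1
    # Verify quorum and the resulting configuration.
    ceph quorum_status --format json-pretty
    ceph config dump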
orchestrator | 2025-09-16 00:55:09.015702 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-09-16 00:55:09.015709 | orchestrator | 2025-09-16 00:55:09.015715 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-16 00:55:09.015722 | orchestrator | Tuesday 16 September 2025 00:49:19 +0000 (0:00:00.755) 0:04:48.589 ***** 2025-09-16 00:55:09.015729 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:09.015736 | orchestrator | 2025-09-16 00:55:09.015742 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-16 00:55:09.015749 | orchestrator | Tuesday 16 September 2025 00:49:19 +0000 (0:00:00.492) 0:04:49.082 ***** 2025-09-16 00:55:09.015755 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:09.015762 | orchestrator | 2025-09-16 00:55:09.015769 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-16 00:55:09.015779 | orchestrator | Tuesday 16 September 2025 00:49:20 +0000 (0:00:00.543) 0:04:49.626 ***** 2025-09-16 00:55:09.015815 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.015823 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.015830 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.015836 | orchestrator | 2025-09-16 00:55:09.015847 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-16 00:55:09.015854 | orchestrator | Tuesday 16 September 2025 00:49:21 +0000 (0:00:00.933) 0:04:50.560 ***** 2025-09-16 00:55:09.015860 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.015867 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.015874 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.015881 | orchestrator | 2025-09-16 00:55:09.015887 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-16 00:55:09.015894 | orchestrator | Tuesday 16 September 2025 00:49:21 +0000 (0:00:00.290) 0:04:50.851 ***** 2025-09-16 00:55:09.015901 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.015907 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.015914 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.015921 | orchestrator | 2025-09-16 00:55:09.015927 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-16 00:55:09.015934 | orchestrator | Tuesday 16 September 2025 00:49:21 +0000 (0:00:00.289) 0:04:51.141 ***** 2025-09-16 00:55:09.015941 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.015947 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.015954 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.015961 | orchestrator | 2025-09-16 00:55:09.015967 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-16 00:55:09.015994 | orchestrator | Tuesday 16 September 2025 00:49:22 +0000 (0:00:00.294) 0:04:51.435 ***** 2025-09-16 00:55:09.016002 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.016009 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.016016 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.016022 | orchestrator | 
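The repeated "Check for a ... container" tasks at the start of each play simply query the container runtime for a matching name and register whether anything was found; the later "Set_fact handler_*_status" tasks turn those results into booleans. By hand this amounts to something like the following, where the <daemon>-<hostname> container naming is an assumption based on ceph-ansible defaults:

    # Empty output means no such container; any ID means the daemon is already running.
    sudo docker ps -q --filter name=ceph-mon-testbed-node-0
    sudo docker ps -q --filter name=ceph-mgr-testbed-node-0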
2025-09-16 00:55:09.016029 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-16 00:55:09.016036 | orchestrator | Tuesday 16 September 2025 00:49:22 +0000 (0:00:00.950) 0:04:52.386 ***** 2025-09-16 00:55:09.016042 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.016049 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.016055 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.016062 | orchestrator | 2025-09-16 00:55:09.016069 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-16 00:55:09.016075 | orchestrator | Tuesday 16 September 2025 00:49:23 +0000 (0:00:00.328) 0:04:52.715 ***** 2025-09-16 00:55:09.016082 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.016089 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.016095 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.016102 | orchestrator | 2025-09-16 00:55:09.016109 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-16 00:55:09.016115 | orchestrator | Tuesday 16 September 2025 00:49:23 +0000 (0:00:00.296) 0:04:53.011 ***** 2025-09-16 00:55:09.016122 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.016128 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.016135 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.016142 | orchestrator | 2025-09-16 00:55:09.016148 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-16 00:55:09.016155 | orchestrator | Tuesday 16 September 2025 00:49:24 +0000 (0:00:00.714) 0:04:53.725 ***** 2025-09-16 00:55:09.016161 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.016168 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.016175 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.016181 | orchestrator | 2025-09-16 00:55:09.016188 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-16 00:55:09.016195 | orchestrator | Tuesday 16 September 2025 00:49:25 +0000 (0:00:00.968) 0:04:54.694 ***** 2025-09-16 00:55:09.016206 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.016213 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.016220 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.016226 | orchestrator | 2025-09-16 00:55:09.016233 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-16 00:55:09.016240 | orchestrator | Tuesday 16 September 2025 00:49:25 +0000 (0:00:00.294) 0:04:54.989 ***** 2025-09-16 00:55:09.016246 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.016253 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.016260 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.016266 | orchestrator | 2025-09-16 00:55:09.016273 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-16 00:55:09.016280 | orchestrator | Tuesday 16 September 2025 00:49:25 +0000 (0:00:00.377) 0:04:55.366 ***** 2025-09-16 00:55:09.016286 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.016293 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.016300 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.016306 | orchestrator | 2025-09-16 00:55:09.016313 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] 
****************************** 2025-09-16 00:55:09.016320 | orchestrator | Tuesday 16 September 2025 00:49:26 +0000 (0:00:00.287) 0:04:55.653 ***** 2025-09-16 00:55:09.016326 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.016333 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.016340 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.016346 | orchestrator | 2025-09-16 00:55:09.016353 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-16 00:55:09.016360 | orchestrator | Tuesday 16 September 2025 00:49:26 +0000 (0:00:00.544) 0:04:56.198 ***** 2025-09-16 00:55:09.016367 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.016373 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.016380 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.016386 | orchestrator | 2025-09-16 00:55:09.016393 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-16 00:55:09.016400 | orchestrator | Tuesday 16 September 2025 00:49:27 +0000 (0:00:00.353) 0:04:56.551 ***** 2025-09-16 00:55:09.016406 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.016413 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.016420 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.016426 | orchestrator | 2025-09-16 00:55:09.016433 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-16 00:55:09.016440 | orchestrator | Tuesday 16 September 2025 00:49:27 +0000 (0:00:00.314) 0:04:56.865 ***** 2025-09-16 00:55:09.016446 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.016453 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.016460 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.016466 | orchestrator | 2025-09-16 00:55:09.016476 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-16 00:55:09.016483 | orchestrator | Tuesday 16 September 2025 00:49:27 +0000 (0:00:00.306) 0:04:57.172 ***** 2025-09-16 00:55:09.016489 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.016496 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.016503 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.016509 | orchestrator | 2025-09-16 00:55:09.016516 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-16 00:55:09.016523 | orchestrator | Tuesday 16 September 2025 00:49:28 +0000 (0:00:00.298) 0:04:57.470 ***** 2025-09-16 00:55:09.016529 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.016536 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.016543 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.016549 | orchestrator | 2025-09-16 00:55:09.016556 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-16 00:55:09.016563 | orchestrator | Tuesday 16 September 2025 00:49:28 +0000 (0:00:00.574) 0:04:58.044 ***** 2025-09-16 00:55:09.016569 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.016576 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.016586 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.016593 | orchestrator | 2025-09-16 00:55:09.016599 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-09-16 00:55:09.016606 | orchestrator | Tuesday 16 September 2025 00:49:29 +0000 
(0:00:00.511) 0:04:58.556 ***** 2025-09-16 00:55:09.016629 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-16 00:55:09.016637 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-16 00:55:09.016644 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-16 00:55:09.016651 | orchestrator | 2025-09-16 00:55:09.016658 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-09-16 00:55:09.016664 | orchestrator | Tuesday 16 September 2025 00:49:30 +0000 (0:00:00.952) 0:04:59.508 ***** 2025-09-16 00:55:09.016671 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:09.016678 | orchestrator | 2025-09-16 00:55:09.016684 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-09-16 00:55:09.016691 | orchestrator | Tuesday 16 September 2025 00:49:31 +0000 (0:00:00.978) 0:05:00.487 ***** 2025-09-16 00:55:09.016697 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.016704 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.016711 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.016718 | orchestrator | 2025-09-16 00:55:09.016724 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-09-16 00:55:09.016731 | orchestrator | Tuesday 16 September 2025 00:49:31 +0000 (0:00:00.670) 0:05:01.158 ***** 2025-09-16 00:55:09.016737 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.016744 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.016751 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.016758 | orchestrator | 2025-09-16 00:55:09.016764 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-09-16 00:55:09.016771 | orchestrator | Tuesday 16 September 2025 00:49:32 +0000 (0:00:00.308) 0:05:01.466 ***** 2025-09-16 00:55:09.016777 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-16 00:55:09.016811 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-16 00:55:09.016819 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-16 00:55:09.016826 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-09-16 00:55:09.016833 | orchestrator | 2025-09-16 00:55:09.016840 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-09-16 00:55:09.016846 | orchestrator | Tuesday 16 September 2025 00:49:42 +0000 (0:00:10.893) 0:05:12.360 ***** 2025-09-16 00:55:09.016853 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.016860 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.016867 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.016873 | orchestrator | 2025-09-16 00:55:09.016880 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-09-16 00:55:09.016887 | orchestrator | Tuesday 16 September 2025 00:49:43 +0000 (0:00:00.582) 0:05:12.943 ***** 2025-09-16 00:55:09.016893 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-16 00:55:09.016900 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-16 00:55:09.016907 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-16 00:55:09.016914 | orchestrator | ok: [testbed-node-0] => (item=None) 
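"Create ceph mgr keyring(s) on a mon node" is the slowest task in this block (almost 11 seconds) because it asks the freshly formed monitor quorum for one keyring per manager host. A hand-run equivalent for a single manager would look roughly like this; the capability set shown is the one documented for ceph-mgr, not read from this job:

    # Request (or create) a keyring for the mgr daemon on testbed-node-0.
    ceph auth get-or-create mgr.testbed-node-0 \
        mon 'allow profile mgr' osd 'allow *' mds 'allow *' \
        -o /etc/ceph/ceph.mgr.testbed-node-0.keyring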
2025-09-16 00:55:09.016921 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:55:09.016928 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:55:09.016935 | orchestrator | 2025-09-16 00:55:09.016941 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-09-16 00:55:09.016948 | orchestrator | Tuesday 16 September 2025 00:49:45 +0000 (0:00:02.328) 0:05:15.272 ***** 2025-09-16 00:55:09.016955 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-16 00:55:09.016966 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-16 00:55:09.016973 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-16 00:55:09.016980 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-16 00:55:09.016987 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-16 00:55:09.016994 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-16 00:55:09.017000 | orchestrator | 2025-09-16 00:55:09.017007 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-09-16 00:55:09.017014 | orchestrator | Tuesday 16 September 2025 00:49:47 +0000 (0:00:01.214) 0:05:16.487 ***** 2025-09-16 00:55:09.017021 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.017027 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.017034 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.017041 | orchestrator | 2025-09-16 00:55:09.017048 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-09-16 00:55:09.017054 | orchestrator | Tuesday 16 September 2025 00:49:47 +0000 (0:00:00.696) 0:05:17.183 ***** 2025-09-16 00:55:09.017061 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.017071 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.017078 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.017085 | orchestrator | 2025-09-16 00:55:09.017092 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-09-16 00:55:09.017099 | orchestrator | Tuesday 16 September 2025 00:49:48 +0000 (0:00:00.533) 0:05:17.717 ***** 2025-09-16 00:55:09.017105 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.017112 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.017119 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.017125 | orchestrator | 2025-09-16 00:55:09.017132 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-09-16 00:55:09.017139 | orchestrator | Tuesday 16 September 2025 00:49:48 +0000 (0:00:00.305) 0:05:18.022 ***** 2025-09-16 00:55:09.017146 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:09.017152 | orchestrator | 2025-09-16 00:55:09.017159 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-09-16 00:55:09.017166 | orchestrator | Tuesday 16 September 2025 00:49:49 +0000 (0:00:00.524) 0:05:18.547 ***** 2025-09-16 00:55:09.017172 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.017179 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.017186 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.017193 | orchestrator | 2025-09-16 00:55:09.017219 | orchestrator | TASK [ceph-mgr : Add 
ceph-mgr systemd service overrides] *********************** 2025-09-16 00:55:09.017227 | orchestrator | Tuesday 16 September 2025 00:49:49 +0000 (0:00:00.305) 0:05:18.852 ***** 2025-09-16 00:55:09.017234 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.017241 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.017247 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.017254 | orchestrator | 2025-09-16 00:55:09.017260 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-09-16 00:55:09.017267 | orchestrator | Tuesday 16 September 2025 00:49:49 +0000 (0:00:00.555) 0:05:19.408 ***** 2025-09-16 00:55:09.017274 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:09.017281 | orchestrator | 2025-09-16 00:55:09.017287 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-09-16 00:55:09.017294 | orchestrator | Tuesday 16 September 2025 00:49:50 +0000 (0:00:00.527) 0:05:19.935 ***** 2025-09-16 00:55:09.017300 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.017307 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.017314 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.017320 | orchestrator | 2025-09-16 00:55:09.017327 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-09-16 00:55:09.017333 | orchestrator | Tuesday 16 September 2025 00:49:51 +0000 (0:00:01.096) 0:05:21.031 ***** 2025-09-16 00:55:09.017344 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.017351 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.017357 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.017364 | orchestrator | 2025-09-16 00:55:09.017371 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-09-16 00:55:09.017377 | orchestrator | Tuesday 16 September 2025 00:49:52 +0000 (0:00:01.324) 0:05:22.356 ***** 2025-09-16 00:55:09.017384 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.017390 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.017397 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.017403 | orchestrator | 2025-09-16 00:55:09.017410 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-09-16 00:55:09.017417 | orchestrator | Tuesday 16 September 2025 00:49:54 +0000 (0:00:01.676) 0:05:24.033 ***** 2025-09-16 00:55:09.017423 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.017430 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.017437 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.017443 | orchestrator | 2025-09-16 00:55:09.017450 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-09-16 00:55:09.017456 | orchestrator | Tuesday 16 September 2025 00:49:56 +0000 (0:00:01.818) 0:05:25.851 ***** 2025-09-16 00:55:09.017463 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.017469 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.017476 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-09-16 00:55:09.017483 | orchestrator | 2025-09-16 00:55:09.017489 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-09-16 00:55:09.017496 | 
orchestrator | Tuesday 16 September 2025 00:49:56 +0000 (0:00:00.397) 0:05:26.249 ***** 2025-09-16 00:55:09.017503 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-09-16 00:55:09.017509 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-09-16 00:55:09.017516 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-09-16 00:55:09.017523 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-09-16 00:55:09.017529 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2025-09-16 00:55:09.017536 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-16 00:55:09.017543 | orchestrator | 2025-09-16 00:55:09.017550 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-09-16 00:55:09.017556 | orchestrator | Tuesday 16 September 2025 00:50:27 +0000 (0:00:30.804) 0:05:57.053 ***** 2025-09-16 00:55:09.017563 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-16 00:55:09.017570 | orchestrator | 2025-09-16 00:55:09.017576 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-09-16 00:55:09.017583 | orchestrator | Tuesday 16 September 2025 00:50:28 +0000 (0:00:01.339) 0:05:58.392 ***** 2025-09-16 00:55:09.017589 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.017596 | orchestrator | 2025-09-16 00:55:09.017606 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-09-16 00:55:09.017612 | orchestrator | Tuesday 16 September 2025 00:50:29 +0000 (0:00:00.322) 0:05:58.715 ***** 2025-09-16 00:55:09.017619 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.017626 | orchestrator | 2025-09-16 00:55:09.017632 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-09-16 00:55:09.017639 | orchestrator | Tuesday 16 September 2025 00:50:29 +0000 (0:00:00.151) 0:05:58.866 ***** 2025-09-16 00:55:09.017646 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-09-16 00:55:09.017652 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-09-16 00:55:09.017662 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-09-16 00:55:09.017669 | orchestrator | 2025-09-16 00:55:09.017676 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2025-09-16 00:55:09.017682 | orchestrator | Tuesday 16 September 2025 00:50:35 +0000 (0:00:06.462) 0:06:05.329 ***** 2025-09-16 00:55:09.017689 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-09-16 00:55:09.017723 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-09-16 00:55:09.017737 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-09-16 00:55:09.017748 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-09-16 00:55:09.017755 | orchestrator | 2025-09-16 00:55:09.017762 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-16 
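"Wait for all mgr to be up" legitimately burns a few retries (five of thirty here, about 30 seconds) while the freshly started ceph-mgr daemons register with the monitors; the role then reconciles the module list, disabling iostat, nfs and restful and enabling dashboard and prometheus. The same changes, expressed as plain CLI calls:

    ceph mgr module ls                 # show enabled and available modules
    ceph mgr module disable restful    # likewise for iostat and nfs
    ceph mgr module enable dashboard
    ceph mgr module enable prometheus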
00:55:09.017769 | orchestrator | Tuesday 16 September 2025 00:50:40 +0000 (0:00:04.938) 0:06:10.267 ***** 2025-09-16 00:55:09.017775 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.017782 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.017801 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.017808 | orchestrator | 2025-09-16 00:55:09.017814 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-16 00:55:09.017821 | orchestrator | Tuesday 16 September 2025 00:50:41 +0000 (0:00:01.008) 0:06:11.276 ***** 2025-09-16 00:55:09.017828 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:09.017835 | orchestrator | 2025-09-16 00:55:09.017842 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-16 00:55:09.017848 | orchestrator | Tuesday 16 September 2025 00:50:42 +0000 (0:00:00.537) 0:06:11.813 ***** 2025-09-16 00:55:09.017855 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.017862 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.017868 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.017875 | orchestrator | 2025-09-16 00:55:09.017882 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-16 00:55:09.017888 | orchestrator | Tuesday 16 September 2025 00:50:42 +0000 (0:00:00.283) 0:06:12.097 ***** 2025-09-16 00:55:09.017895 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.017902 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.017909 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.017915 | orchestrator | 2025-09-16 00:55:09.017922 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-16 00:55:09.017929 | orchestrator | Tuesday 16 September 2025 00:50:44 +0000 (0:00:01.386) 0:06:13.484 ***** 2025-09-16 00:55:09.017935 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-16 00:55:09.017942 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-16 00:55:09.017949 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-16 00:55:09.017956 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.017962 | orchestrator | 2025-09-16 00:55:09.017969 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-16 00:55:09.017976 | orchestrator | Tuesday 16 September 2025 00:50:44 +0000 (0:00:00.595) 0:06:14.080 ***** 2025-09-16 00:55:09.017982 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.017989 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.017996 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.018003 | orchestrator | 2025-09-16 00:55:09.018009 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-09-16 00:55:09.018031 | orchestrator | 2025-09-16 00:55:09.018039 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-16 00:55:09.018046 | orchestrator | Tuesday 16 September 2025 00:50:45 +0000 (0:00:00.536) 0:06:14.617 ***** 2025-09-16 00:55:09.018052 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.018064 | orchestrator | 2025-09-16 00:55:09.018071 
| orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-16 00:55:09.018078 | orchestrator | Tuesday 16 September 2025 00:50:45 +0000 (0:00:00.727) 0:06:15.344 ***** 2025-09-16 00:55:09.018084 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.018091 | orchestrator | 2025-09-16 00:55:09.018098 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-16 00:55:09.018104 | orchestrator | Tuesday 16 September 2025 00:50:46 +0000 (0:00:00.513) 0:06:15.858 ***** 2025-09-16 00:55:09.018111 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.018118 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.018124 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.018131 | orchestrator | 2025-09-16 00:55:09.018138 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-16 00:55:09.018144 | orchestrator | Tuesday 16 September 2025 00:50:46 +0000 (0:00:00.270) 0:06:16.128 ***** 2025-09-16 00:55:09.018151 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.018158 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.018164 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.018171 | orchestrator | 2025-09-16 00:55:09.018183 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-16 00:55:09.018190 | orchestrator | Tuesday 16 September 2025 00:50:47 +0000 (0:00:00.994) 0:06:17.122 ***** 2025-09-16 00:55:09.018196 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.018203 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.018210 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.018216 | orchestrator | 2025-09-16 00:55:09.018224 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-16 00:55:09.018232 | orchestrator | Tuesday 16 September 2025 00:50:48 +0000 (0:00:00.692) 0:06:17.815 ***** 2025-09-16 00:55:09.018240 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.018248 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.018256 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.018263 | orchestrator | 2025-09-16 00:55:09.018271 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-16 00:55:09.018279 | orchestrator | Tuesday 16 September 2025 00:50:49 +0000 (0:00:00.711) 0:06:18.527 ***** 2025-09-16 00:55:09.018287 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.018295 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.018303 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.018311 | orchestrator | 2025-09-16 00:55:09.018319 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-16 00:55:09.018327 | orchestrator | Tuesday 16 September 2025 00:50:49 +0000 (0:00:00.286) 0:06:18.814 ***** 2025-09-16 00:55:09.018359 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.018368 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.018376 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.018384 | orchestrator | 2025-09-16 00:55:09.018392 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-16 00:55:09.018400 | orchestrator | Tuesday 16 
September 2025 00:50:49 +0000 (0:00:00.518) 0:06:19.332 ***** 2025-09-16 00:55:09.018408 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.018416 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.018424 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.018432 | orchestrator | 2025-09-16 00:55:09.018439 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-16 00:55:09.018447 | orchestrator | Tuesday 16 September 2025 00:50:50 +0000 (0:00:00.320) 0:06:19.653 ***** 2025-09-16 00:55:09.018455 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.018463 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.018471 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.018479 | orchestrator | 2025-09-16 00:55:09.018487 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-16 00:55:09.018501 | orchestrator | Tuesday 16 September 2025 00:50:50 +0000 (0:00:00.709) 0:06:20.362 ***** 2025-09-16 00:55:09.018509 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.018516 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.018524 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.018532 | orchestrator | 2025-09-16 00:55:09.018540 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-16 00:55:09.018548 | orchestrator | Tuesday 16 September 2025 00:50:51 +0000 (0:00:00.677) 0:06:21.040 ***** 2025-09-16 00:55:09.018555 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.018563 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.018571 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.018579 | orchestrator | 2025-09-16 00:55:09.018587 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-16 00:55:09.018595 | orchestrator | Tuesday 16 September 2025 00:50:52 +0000 (0:00:00.529) 0:06:21.569 ***** 2025-09-16 00:55:09.018603 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.018611 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.018619 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.018626 | orchestrator | 2025-09-16 00:55:09.018634 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-16 00:55:09.018642 | orchestrator | Tuesday 16 September 2025 00:50:52 +0000 (0:00:00.309) 0:06:21.878 ***** 2025-09-16 00:55:09.018650 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.018658 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.018666 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.018673 | orchestrator | 2025-09-16 00:55:09.018681 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-16 00:55:09.018689 | orchestrator | Tuesday 16 September 2025 00:50:52 +0000 (0:00:00.286) 0:06:22.165 ***** 2025-09-16 00:55:09.018697 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.018705 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.018713 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.018721 | orchestrator | 2025-09-16 00:55:09.018728 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-16 00:55:09.018736 | orchestrator | Tuesday 16 September 2025 00:50:53 +0000 (0:00:00.302) 0:06:22.467 ***** 2025-09-16 00:55:09.018744 | orchestrator | ok: [testbed-node-3] 
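The "Check for a ... container" and "Set_fact handler_..._status" pairs that repeat above follow one pattern: probe whether a daemon's container is already running on the host, then record the answer as a fact that the restart handlers consult later. A minimal sketch of that pattern for the OSD case, assuming podman as the container runtime and the fact name handler_osd_status as it appears in the log; the real ceph-handler role wraps this in additional conditionals:

  - name: Check for an osd container
    ansible.builtin.command: podman ps -q --filter name=ceph-osd
    register: ceph_osd_container_stat
    changed_when: false
    failed_when: false

  - name: Set_fact handler_osd_status
    ansible.builtin.set_fact:
      handler_osd_status: "{{ (ceph_osd_container_stat.stdout | default('')) | length > 0 }}"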
2025-09-16 00:55:09.018752 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.018760 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.018768 | orchestrator | 2025-09-16 00:55:09.018776 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-16 00:55:09.018795 | orchestrator | Tuesday 16 September 2025 00:50:53 +0000 (0:00:00.524) 0:06:22.992 ***** 2025-09-16 00:55:09.018804 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.018812 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.018820 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.018827 | orchestrator | 2025-09-16 00:55:09.018835 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-16 00:55:09.018843 | orchestrator | Tuesday 16 September 2025 00:50:53 +0000 (0:00:00.309) 0:06:23.302 ***** 2025-09-16 00:55:09.018851 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.018859 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.018867 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.018875 | orchestrator | 2025-09-16 00:55:09.018883 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-16 00:55:09.018891 | orchestrator | Tuesday 16 September 2025 00:50:54 +0000 (0:00:00.277) 0:06:23.579 ***** 2025-09-16 00:55:09.018898 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.018906 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.018914 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.018922 | orchestrator | 2025-09-16 00:55:09.018930 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-16 00:55:09.018938 | orchestrator | Tuesday 16 September 2025 00:50:54 +0000 (0:00:00.346) 0:06:23.925 ***** 2025-09-16 00:55:09.018950 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.018958 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.018966 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.018974 | orchestrator | 2025-09-16 00:55:09.018981 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-16 00:55:09.018989 | orchestrator | Tuesday 16 September 2025 00:50:55 +0000 (0:00:00.535) 0:06:24.461 ***** 2025-09-16 00:55:09.018997 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.019005 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.019013 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.019020 | orchestrator | 2025-09-16 00:55:09.019028 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-09-16 00:55:09.019036 | orchestrator | Tuesday 16 September 2025 00:50:55 +0000 (0:00:00.514) 0:06:24.976 ***** 2025-09-16 00:55:09.019044 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.019052 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.019060 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.019067 | orchestrator | 2025-09-16 00:55:09.019075 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-09-16 00:55:09.019083 | orchestrator | Tuesday 16 September 2025 00:50:55 +0000 (0:00:00.308) 0:06:25.285 ***** 2025-09-16 00:55:09.019091 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-16 00:55:09.019103 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-16 00:55:09.019111 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-16 00:55:09.019119 | orchestrator | 2025-09-16 00:55:09.019127 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-09-16 00:55:09.019135 | orchestrator | Tuesday 16 September 2025 00:50:56 +0000 (0:00:00.856) 0:06:26.142 ***** 2025-09-16 00:55:09.019143 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.019151 | orchestrator | 2025-09-16 00:55:09.019159 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-09-16 00:55:09.019167 | orchestrator | Tuesday 16 September 2025 00:50:57 +0000 (0:00:00.847) 0:06:26.989 ***** 2025-09-16 00:55:09.019174 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.019182 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.019190 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.019198 | orchestrator | 2025-09-16 00:55:09.019206 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-09-16 00:55:09.019214 | orchestrator | Tuesday 16 September 2025 00:50:57 +0000 (0:00:00.290) 0:06:27.279 ***** 2025-09-16 00:55:09.019222 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.019230 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.019238 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.019246 | orchestrator | 2025-09-16 00:55:09.019254 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-09-16 00:55:09.019261 | orchestrator | Tuesday 16 September 2025 00:50:58 +0000 (0:00:00.278) 0:06:27.557 ***** 2025-09-16 00:55:09.019269 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.019277 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.019285 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.019293 | orchestrator | 2025-09-16 00:55:09.019301 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-09-16 00:55:09.019309 | orchestrator | Tuesday 16 September 2025 00:50:58 +0000 (0:00:00.865) 0:06:28.423 ***** 2025-09-16 00:55:09.019317 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.019324 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.019332 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.019340 | orchestrator | 2025-09-16 00:55:09.019348 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-09-16 00:55:09.019377 | orchestrator | Tuesday 16 September 2025 00:50:59 +0000 (0:00:00.322) 0:06:28.746 ***** 2025-09-16 00:55:09.019390 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-16 00:55:09.019398 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-16 00:55:09.019406 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-16 00:55:09.019414 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-16 00:55:09.019422 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-16 
00:55:09.019430 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-16 00:55:09.019438 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-16 00:55:09.019445 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-16 00:55:09.019453 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-16 00:55:09.019461 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-16 00:55:09.019469 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-16 00:55:09.019476 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-16 00:55:09.019484 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-16 00:55:09.019492 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-16 00:55:09.019500 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-16 00:55:09.019507 | orchestrator | 2025-09-16 00:55:09.019515 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-09-16 00:55:09.019526 | orchestrator | Tuesday 16 September 2025 00:51:03 +0000 (0:00:04.195) 0:06:32.941 ***** 2025-09-16 00:55:09.019535 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.019542 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.019550 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.019558 | orchestrator | 2025-09-16 00:55:09.019566 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-09-16 00:55:09.019574 | orchestrator | Tuesday 16 September 2025 00:51:03 +0000 (0:00:00.270) 0:06:33.212 ***** 2025-09-16 00:55:09.019582 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.019590 | orchestrator | 2025-09-16 00:55:09.019597 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-09-16 00:55:09.019605 | orchestrator | Tuesday 16 September 2025 00:51:04 +0000 (0:00:00.732) 0:06:33.945 ***** 2025-09-16 00:55:09.019613 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-16 00:55:09.019621 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-16 00:55:09.019629 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-16 00:55:09.019641 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-09-16 00:55:09.019649 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-09-16 00:55:09.019657 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-09-16 00:55:09.019665 | orchestrator | 2025-09-16 00:55:09.019673 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-09-16 00:55:09.019681 | orchestrator | Tuesday 16 September 2025 00:51:05 +0000 (0:00:01.030) 0:06:34.975 ***** 2025-09-16 00:55:09.019689 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:55:09.019697 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-16 00:55:09.019705 
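The "Apply operating system tuning" task above pushed five kernel parameters onto each OSD node: fs.aio-max-nr=1048576, fs.file-max=26234859, vm.zone_reclaim_mode=0, vm.swappiness=10 and vm.min_free_kbytes=67584 (the last one derived from the node's RAM in the preceding set_fact). A minimal sketch of applying the same values with the stock ansible.posix.sysctl module; the task layout and target file are illustrative, not the literal ceph-osd role source:

  - name: Apply operating system tuning
    ansible.posix.sysctl:
      name: "{{ item.name }}"
      value: "{{ item.value }}"
      state: present
      sysctl_file: /etc/sysctl.d/ceph-tuning.conf   # assumed target file
      reload: true
    loop:
      - { name: fs.aio-max-nr, value: "1048576" }
      - { name: fs.file-max, value: "26234859" }
      - { name: vm.zone_reclaim_mode, value: "0" }
      - { name: vm.swappiness, value: "10" }
      - { name: vm.min_free_kbytes, value: "67584" }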
| orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-16 00:55:09.019718 | orchestrator | 2025-09-16 00:55:09.019726 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-09-16 00:55:09.019734 | orchestrator | Tuesday 16 September 2025 00:51:07 +0000 (0:00:02.082) 0:06:37.057 ***** 2025-09-16 00:55:09.019741 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-16 00:55:09.019749 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-16 00:55:09.019757 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.019765 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-16 00:55:09.019773 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-16 00:55:09.019781 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.019822 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-16 00:55:09.019830 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-16 00:55:09.019838 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.019846 | orchestrator | 2025-09-16 00:55:09.019854 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-09-16 00:55:09.019862 | orchestrator | Tuesday 16 September 2025 00:51:09 +0000 (0:00:01.428) 0:06:38.486 ***** 2025-09-16 00:55:09.019870 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-16 00:55:09.019878 | orchestrator | 2025-09-16 00:55:09.019886 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-09-16 00:55:09.019894 | orchestrator | Tuesday 16 September 2025 00:51:11 +0000 (0:00:02.135) 0:06:40.621 ***** 2025-09-16 00:55:09.019902 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.019910 | orchestrator | 2025-09-16 00:55:09.019918 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-09-16 00:55:09.019925 | orchestrator | Tuesday 16 September 2025 00:51:11 +0000 (0:00:00.569) 0:06:41.191 ***** 2025-09-16 00:55:09.019934 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-457b984f-2001-5589-9984-9a697803acd2', 'data_vg': 'ceph-457b984f-2001-5589-9984-9a697803acd2'}) 2025-09-16 00:55:09.019943 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a154e298-15cb-5d50-9a1c-17bc1371db7e', 'data_vg': 'ceph-a154e298-15cb-5d50-9a1c-17bc1371db7e'}) 2025-09-16 00:55:09.019950 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8832b43a-4370-5f7f-b8ca-e1ef860202d6', 'data_vg': 'ceph-8832b43a-4370-5f7f-b8ca-e1ef860202d6'}) 2025-09-16 00:55:09.019957 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-56010334-63d7-5603-a2fe-432c47d6dcb8', 'data_vg': 'ceph-56010334-63d7-5603-a2fe-432c47d6dcb8'}) 2025-09-16 00:55:09.019963 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d2877fc6-62dc-51ad-b157-4c09a4f274b5', 'data_vg': 'ceph-d2877fc6-62dc-51ad-b157-4c09a4f274b5'}) 2025-09-16 00:55:09.019970 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b409e677-b998-57d2-be40-43b65c9fb72d', 'data_vg': 'ceph-b409e677-b998-57d2-be40-43b65c9fb72d'}) 2025-09-16 00:55:09.019977 | orchestrator | 2025-09-16 00:55:09.019984 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-09-16 
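Each item handed to "Use ceph-volume to create osds" above points at an existing LVM logical volume (data) inside a volume group (data_vg), so each OSD is created directly on a prepared LV. ceph-ansible normally drives this through its own ceph_volume module; a rough equivalent with a plain command task, shown only to make the underlying ceph-volume call visible (and assuming a ceph-volume binary or wrapper is available on the OSD nodes), would look like this:

  - name: Use ceph-volume to create osds
    ansible.builtin.command: >
      ceph-volume lvm create --bluestore
      --data {{ item.data_vg }}/{{ item.data }}
    loop: "{{ lvm_volumes }}"   # the data/data_vg pairs visible in the log above
    become: true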
00:55:09.019991 | orchestrator | Tuesday 16 September 2025 00:51:55 +0000 (0:00:43.254) 0:07:24.445 ***** 2025-09-16 00:55:09.019997 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.020004 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.020011 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.020017 | orchestrator | 2025-09-16 00:55:09.020024 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-09-16 00:55:09.020031 | orchestrator | Tuesday 16 September 2025 00:51:55 +0000 (0:00:00.534) 0:07:24.980 ***** 2025-09-16 00:55:09.020041 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.020048 | orchestrator | 2025-09-16 00:55:09.020055 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-09-16 00:55:09.020061 | orchestrator | Tuesday 16 September 2025 00:51:56 +0000 (0:00:00.507) 0:07:25.488 ***** 2025-09-16 00:55:09.020072 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.020079 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.020086 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.020093 | orchestrator | 2025-09-16 00:55:09.020099 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-09-16 00:55:09.020106 | orchestrator | Tuesday 16 September 2025 00:51:56 +0000 (0:00:00.666) 0:07:26.154 ***** 2025-09-16 00:55:09.020112 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.020119 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.020126 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.020132 | orchestrator | 2025-09-16 00:55:09.020139 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-09-16 00:55:09.020146 | orchestrator | Tuesday 16 September 2025 00:51:59 +0000 (0:00:02.758) 0:07:28.912 ***** 2025-09-16 00:55:09.020153 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.020159 | orchestrator | 2025-09-16 00:55:09.020170 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-09-16 00:55:09.020177 | orchestrator | Tuesday 16 September 2025 00:51:59 +0000 (0:00:00.519) 0:07:29.432 ***** 2025-09-16 00:55:09.020183 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.020190 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.020197 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.020203 | orchestrator | 2025-09-16 00:55:09.020210 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-09-16 00:55:09.020217 | orchestrator | Tuesday 16 September 2025 00:52:01 +0000 (0:00:01.202) 0:07:30.634 ***** 2025-09-16 00:55:09.020224 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.020230 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.020237 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.020244 | orchestrator | 2025-09-16 00:55:09.020250 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-09-16 00:55:09.020257 | orchestrator | Tuesday 16 September 2025 00:52:02 +0000 (0:00:01.340) 0:07:31.975 ***** 2025-09-16 00:55:09.020264 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.020271 | orchestrator | changed: 
[testbed-node-4] 2025-09-16 00:55:09.020277 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.020284 | orchestrator | 2025-09-16 00:55:09.020291 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-09-16 00:55:09.020297 | orchestrator | Tuesday 16 September 2025 00:52:04 +0000 (0:00:01.676) 0:07:33.651 ***** 2025-09-16 00:55:09.020304 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.020311 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.020317 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.020324 | orchestrator | 2025-09-16 00:55:09.020331 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-09-16 00:55:09.020337 | orchestrator | Tuesday 16 September 2025 00:52:04 +0000 (0:00:00.300) 0:07:33.952 ***** 2025-09-16 00:55:09.020344 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.020351 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.020357 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.020364 | orchestrator | 2025-09-16 00:55:09.020371 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-09-16 00:55:09.020377 | orchestrator | Tuesday 16 September 2025 00:52:04 +0000 (0:00:00.312) 0:07:34.265 ***** 2025-09-16 00:55:09.020384 | orchestrator | ok: [testbed-node-3] => (item=5) 2025-09-16 00:55:09.020391 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-09-16 00:55:09.020397 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-09-16 00:55:09.020404 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-16 00:55:09.020410 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-09-16 00:55:09.020417 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-09-16 00:55:09.020424 | orchestrator | 2025-09-16 00:55:09.020430 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-09-16 00:55:09.020441 | orchestrator | Tuesday 16 September 2025 00:52:06 +0000 (0:00:01.260) 0:07:35.525 ***** 2025-09-16 00:55:09.020448 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-09-16 00:55:09.020455 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-09-16 00:55:09.020461 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-09-16 00:55:09.020468 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-16 00:55:09.020474 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-09-16 00:55:09.020481 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-09-16 00:55:09.020488 | orchestrator | 2025-09-16 00:55:09.020494 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-09-16 00:55:09.020501 | orchestrator | Tuesday 16 September 2025 00:52:08 +0000 (0:00:02.143) 0:07:37.669 ***** 2025-09-16 00:55:09.020508 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-09-16 00:55:09.020514 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-09-16 00:55:09.020521 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-09-16 00:55:09.020528 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-16 00:55:09.020534 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-09-16 00:55:09.020541 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-09-16 00:55:09.020547 | orchestrator | 2025-09-16 00:55:09.020554 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-09-16 
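The systemd block above templates a ceph-osd@.service unit and a ceph-osd.target, creates the per-OSD directories and run files, and then brings up one service instance per collected OSD id (0-5 spread over the three nodes). A minimal sketch of the start step with ansible.builtin.systemd; osd_ids stands in for the list gathered by the "Collect osd ids" task:

  - name: Systemd start osd
    ansible.builtin.systemd:
      name: "ceph-osd@{{ item }}"
      state: started
      enabled: true
      daemon_reload: true
    loop: "{{ osd_ids }}"   # e.g. [5, 0] on testbed-node-3 in this run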
00:55:09.020561 | orchestrator | Tuesday 16 September 2025 00:52:11 +0000 (0:00:03.484) 0:07:41.153 ***** 2025-09-16 00:55:09.020567 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.020574 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.020581 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-16 00:55:09.020587 | orchestrator | 2025-09-16 00:55:09.020594 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-09-16 00:55:09.020601 | orchestrator | Tuesday 16 September 2025 00:52:14 +0000 (0:00:02.321) 0:07:43.475 ***** 2025-09-16 00:55:09.020610 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.020617 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.020624 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-09-16 00:55:09.020631 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-16 00:55:09.020638 | orchestrator | 2025-09-16 00:55:09.020644 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-09-16 00:55:09.020651 | orchestrator | Tuesday 16 September 2025 00:52:26 +0000 (0:00:12.763) 0:07:56.238 ***** 2025-09-16 00:55:09.020658 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.020665 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.020671 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.020678 | orchestrator | 2025-09-16 00:55:09.020684 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-16 00:55:09.020691 | orchestrator | Tuesday 16 September 2025 00:52:27 +0000 (0:00:00.844) 0:07:57.083 ***** 2025-09-16 00:55:09.020698 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.020704 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.020711 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.020718 | orchestrator | 2025-09-16 00:55:09.020725 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-16 00:55:09.020734 | orchestrator | Tuesday 16 September 2025 00:52:28 +0000 (0:00:00.512) 0:07:57.595 ***** 2025-09-16 00:55:09.020741 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.020748 | orchestrator | 2025-09-16 00:55:09.020755 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-16 00:55:09.020761 | orchestrator | Tuesday 16 September 2025 00:52:28 +0000 (0:00:00.488) 0:07:58.084 ***** 2025-09-16 00:55:09.020768 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-16 00:55:09.020778 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-16 00:55:09.020796 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-16 00:55:09.020803 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.020810 | orchestrator | 2025-09-16 00:55:09.020817 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-16 00:55:09.020823 | orchestrator | Tuesday 16 September 2025 00:52:29 +0000 (0:00:00.351) 0:07:58.435 ***** 2025-09-16 00:55:09.020830 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.020837 | orchestrator | skipping: [testbed-node-4] 2025-09-16 
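Once the OSD services are running, the play clears the noup flag that was set before OSD creation and then polls a monitor until every OSD reports up; the log shows exactly one retry ("60 retries left") before the cluster converged. A minimal sketch of that unset-and-wait sequence, assuming an inventory group named mons and the field names returned by `ceph osd stat -f json`; the delay value is an assumption:

  - name: Unset noup flag
    ansible.builtin.command: ceph osd unset noup
    delegate_to: "{{ groups['mons'][0] }}"
    run_once: true

  - name: Wait for all osd to be up
    ansible.builtin.command: ceph osd stat -f json
    register: osd_stat
    delegate_to: "{{ groups['mons'][0] }}"
    run_once: true
    changed_when: false
    retries: 60
    delay: 10
    until: >
      (osd_stat.stdout | from_json).num_osds > 0 and
      (osd_stat.stdout | from_json).num_osds == (osd_stat.stdout | from_json).num_up_osds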
00:55:09.020843 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.020850 | orchestrator | 2025-09-16 00:55:09.020857 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-16 00:55:09.020863 | orchestrator | Tuesday 16 September 2025 00:52:29 +0000 (0:00:00.250) 0:07:58.685 ***** 2025-09-16 00:55:09.020870 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.020877 | orchestrator | 2025-09-16 00:55:09.020883 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-16 00:55:09.020890 | orchestrator | Tuesday 16 September 2025 00:52:29 +0000 (0:00:00.204) 0:07:58.890 ***** 2025-09-16 00:55:09.020896 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.020903 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.020910 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.020916 | orchestrator | 2025-09-16 00:55:09.020923 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-16 00:55:09.020930 | orchestrator | Tuesday 16 September 2025 00:52:29 +0000 (0:00:00.411) 0:07:59.301 ***** 2025-09-16 00:55:09.020936 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.020943 | orchestrator | 2025-09-16 00:55:09.020950 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-16 00:55:09.020956 | orchestrator | Tuesday 16 September 2025 00:52:30 +0000 (0:00:00.193) 0:07:59.495 ***** 2025-09-16 00:55:09.020963 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.020969 | orchestrator | 2025-09-16 00:55:09.020976 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-16 00:55:09.020982 | orchestrator | Tuesday 16 September 2025 00:52:30 +0000 (0:00:00.183) 0:07:59.678 ***** 2025-09-16 00:55:09.020989 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.020996 | orchestrator | 2025-09-16 00:55:09.021002 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-16 00:55:09.021009 | orchestrator | Tuesday 16 September 2025 00:52:30 +0000 (0:00:00.108) 0:07:59.787 ***** 2025-09-16 00:55:09.021015 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.021022 | orchestrator | 2025-09-16 00:55:09.021029 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-16 00:55:09.021035 | orchestrator | Tuesday 16 September 2025 00:52:30 +0000 (0:00:00.209) 0:07:59.996 ***** 2025-09-16 00:55:09.021042 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.021048 | orchestrator | 2025-09-16 00:55:09.021055 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-16 00:55:09.021062 | orchestrator | Tuesday 16 September 2025 00:52:30 +0000 (0:00:00.249) 0:08:00.246 ***** 2025-09-16 00:55:09.021068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-16 00:55:09.021075 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-16 00:55:09.021082 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-16 00:55:09.021088 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.021095 | orchestrator | 2025-09-16 00:55:09.021102 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-16 00:55:09.021108 | 
orchestrator | Tuesday 16 September 2025 00:52:31 +0000 (0:00:00.367) 0:08:00.614 ***** 2025-09-16 00:55:09.021115 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.021122 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.021128 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.021139 | orchestrator | 2025-09-16 00:55:09.021146 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-16 00:55:09.021153 | orchestrator | Tuesday 16 September 2025 00:52:31 +0000 (0:00:00.271) 0:08:00.885 ***** 2025-09-16 00:55:09.021162 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.021169 | orchestrator | 2025-09-16 00:55:09.021176 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-16 00:55:09.021182 | orchestrator | Tuesday 16 September 2025 00:52:32 +0000 (0:00:00.636) 0:08:01.521 ***** 2025-09-16 00:55:09.021189 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.021195 | orchestrator | 2025-09-16 00:55:09.021202 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-09-16 00:55:09.021209 | orchestrator | 2025-09-16 00:55:09.021215 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-16 00:55:09.021222 | orchestrator | Tuesday 16 September 2025 00:52:32 +0000 (0:00:00.639) 0:08:02.161 ***** 2025-09-16 00:55:09.021229 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:09.021236 | orchestrator | 2025-09-16 00:55:09.021242 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-16 00:55:09.021249 | orchestrator | Tuesday 16 September 2025 00:52:33 +0000 (0:00:00.987) 0:08:03.149 ***** 2025-09-16 00:55:09.021259 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:09.021266 | orchestrator | 2025-09-16 00:55:09.021273 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-16 00:55:09.021280 | orchestrator | Tuesday 16 September 2025 00:52:34 +0000 (0:00:00.987) 0:08:04.137 ***** 2025-09-16 00:55:09.021286 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.021293 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.021300 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.021307 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.021314 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.021320 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.021327 | orchestrator | 2025-09-16 00:55:09.021334 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-16 00:55:09.021341 | orchestrator | Tuesday 16 September 2025 00:52:35 +0000 (0:00:01.047) 0:08:05.185 ***** 2025-09-16 00:55:09.021347 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.021354 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.021361 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.021368 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.021374 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.021381 | orchestrator | 
ok: [testbed-node-5] 2025-09-16 00:55:09.021388 | orchestrator | 2025-09-16 00:55:09.021395 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-16 00:55:09.021401 | orchestrator | Tuesday 16 September 2025 00:52:36 +0000 (0:00:00.798) 0:08:05.983 ***** 2025-09-16 00:55:09.021408 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.021415 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.021422 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.021429 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.021435 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.021442 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.021449 | orchestrator | 2025-09-16 00:55:09.021455 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-16 00:55:09.021462 | orchestrator | Tuesday 16 September 2025 00:52:37 +0000 (0:00:00.859) 0:08:06.843 ***** 2025-09-16 00:55:09.021469 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.021476 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.021482 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.021489 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.021499 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.021506 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.021513 | orchestrator | 2025-09-16 00:55:09.021519 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-16 00:55:09.021526 | orchestrator | Tuesday 16 September 2025 00:52:38 +0000 (0:00:00.717) 0:08:07.560 ***** 2025-09-16 00:55:09.021533 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.021539 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.021546 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.021552 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.021559 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.021565 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.021572 | orchestrator | 2025-09-16 00:55:09.021579 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-16 00:55:09.021585 | orchestrator | Tuesday 16 September 2025 00:52:39 +0000 (0:00:00.965) 0:08:08.526 ***** 2025-09-16 00:55:09.021592 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.021599 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.021605 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.021612 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.021618 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.021625 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.021632 | orchestrator | 2025-09-16 00:55:09.021638 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-16 00:55:09.021645 | orchestrator | Tuesday 16 September 2025 00:52:39 +0000 (0:00:00.773) 0:08:09.300 ***** 2025-09-16 00:55:09.021652 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.021658 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.021665 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.021671 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.021678 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.021684 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.021691 
| orchestrator | 2025-09-16 00:55:09.021697 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-16 00:55:09.021704 | orchestrator | Tuesday 16 September 2025 00:52:40 +0000 (0:00:00.574) 0:08:09.875 ***** 2025-09-16 00:55:09.021711 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.021717 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.021724 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.021731 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.021737 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.021744 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.021750 | orchestrator | 2025-09-16 00:55:09.021757 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-16 00:55:09.021766 | orchestrator | Tuesday 16 September 2025 00:52:41 +0000 (0:00:01.231) 0:08:11.106 ***** 2025-09-16 00:55:09.021773 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.021780 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.021797 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.021803 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.021810 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.021816 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.021823 | orchestrator | 2025-09-16 00:55:09.021829 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-16 00:55:09.021836 | orchestrator | Tuesday 16 September 2025 00:52:42 +0000 (0:00:00.995) 0:08:12.102 ***** 2025-09-16 00:55:09.021843 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.021849 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.021856 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.021863 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.021869 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.021876 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.021882 | orchestrator | 2025-09-16 00:55:09.021889 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-16 00:55:09.021899 | orchestrator | Tuesday 16 September 2025 00:52:43 +0000 (0:00:00.788) 0:08:12.890 ***** 2025-09-16 00:55:09.021906 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.021913 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.021923 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.021930 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.021936 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.021943 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.021949 | orchestrator | 2025-09-16 00:55:09.021956 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-16 00:55:09.021963 | orchestrator | Tuesday 16 September 2025 00:52:44 +0000 (0:00:00.592) 0:08:13.483 ***** 2025-09-16 00:55:09.021969 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.021976 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.021982 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.021989 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.021996 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.022002 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.022009 | orchestrator | 2025-09-16 00:55:09.022041 | orchestrator | TASK [ceph-handler : Set_fact 
handler_mds_status] ****************************** 2025-09-16 00:55:09.022049 | orchestrator | Tuesday 16 September 2025 00:52:44 +0000 (0:00:00.828) 0:08:14.312 ***** 2025-09-16 00:55:09.022056 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.022062 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.022069 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.022076 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.022082 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.022089 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.022096 | orchestrator | 2025-09-16 00:55:09.022102 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-16 00:55:09.022109 | orchestrator | Tuesday 16 September 2025 00:52:45 +0000 (0:00:00.584) 0:08:14.897 ***** 2025-09-16 00:55:09.022116 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.022122 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.022129 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.022135 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.022142 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.022149 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.022155 | orchestrator | 2025-09-16 00:55:09.022162 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-16 00:55:09.022169 | orchestrator | Tuesday 16 September 2025 00:52:46 +0000 (0:00:00.782) 0:08:15.679 ***** 2025-09-16 00:55:09.022175 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.022182 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.022189 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.022195 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.022202 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.022208 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.022215 | orchestrator | 2025-09-16 00:55:09.022222 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-16 00:55:09.022228 | orchestrator | Tuesday 16 September 2025 00:52:46 +0000 (0:00:00.638) 0:08:16.318 ***** 2025-09-16 00:55:09.022235 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.022242 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.022248 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.022255 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:55:09.022261 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:55:09.022268 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:55:09.022274 | orchestrator | 2025-09-16 00:55:09.022281 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-16 00:55:09.022288 | orchestrator | Tuesday 16 September 2025 00:52:47 +0000 (0:00:00.788) 0:08:17.107 ***** 2025-09-16 00:55:09.022294 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.022301 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.022312 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.022319 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.022325 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.022332 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.022338 | orchestrator | 2025-09-16 00:55:09.022345 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] 
**************************** 2025-09-16 00:55:09.022352 | orchestrator | Tuesday 16 September 2025 00:52:48 +0000 (0:00:00.611) 0:08:17.718 ***** 2025-09-16 00:55:09.022358 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.022365 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.022372 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.022378 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.022385 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.022391 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.022398 | orchestrator | 2025-09-16 00:55:09.022405 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-16 00:55:09.022411 | orchestrator | Tuesday 16 September 2025 00:52:49 +0000 (0:00:00.776) 0:08:18.494 ***** 2025-09-16 00:55:09.022418 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.022424 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.022431 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.022437 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.022444 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.022451 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.022457 | orchestrator | 2025-09-16 00:55:09.022464 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-09-16 00:55:09.022476 | orchestrator | Tuesday 16 September 2025 00:52:50 +0000 (0:00:01.168) 0:08:19.663 ***** 2025-09-16 00:55:09.022483 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-16 00:55:09.022489 | orchestrator | 2025-09-16 00:55:09.022496 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-09-16 00:55:09.022502 | orchestrator | Tuesday 16 September 2025 00:52:54 +0000 (0:00:03.931) 0:08:23.594 ***** 2025-09-16 00:55:09.022509 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-16 00:55:09.022515 | orchestrator | 2025-09-16 00:55:09.022522 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-09-16 00:55:09.022529 | orchestrator | Tuesday 16 September 2025 00:52:56 +0000 (0:00:01.939) 0:08:25.533 ***** 2025-09-16 00:55:09.022535 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.022542 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.022549 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.022555 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.022562 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.022569 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.022575 | orchestrator | 2025-09-16 00:55:09.022582 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-09-16 00:55:09.022589 | orchestrator | Tuesday 16 September 2025 00:52:57 +0000 (0:00:01.456) 0:08:26.990 ***** 2025-09-16 00:55:09.022599 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.022606 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.022613 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.022619 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.022626 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.022632 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.022639 | orchestrator | 2025-09-16 00:55:09.022646 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] 
********************************** 2025-09-16 00:55:09.022652 | orchestrator | Tuesday 16 September 2025 00:52:58 +0000 (0:00:01.155) 0:08:28.145 ***** 2025-09-16 00:55:09.022659 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:09.022667 | orchestrator | 2025-09-16 00:55:09.022673 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-09-16 00:55:09.022680 | orchestrator | Tuesday 16 September 2025 00:52:59 +0000 (0:00:01.258) 0:08:29.404 ***** 2025-09-16 00:55:09.022690 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.022697 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.022704 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.022710 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.022717 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.022723 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.022730 | orchestrator | 2025-09-16 00:55:09.022736 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-09-16 00:55:09.022743 | orchestrator | Tuesday 16 September 2025 00:53:01 +0000 (0:00:01.546) 0:08:30.950 ***** 2025-09-16 00:55:09.022750 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.022756 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.022763 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.022769 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.022776 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.022783 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:55:09.022799 | orchestrator | 2025-09-16 00:55:09.022806 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-09-16 00:55:09.022812 | orchestrator | Tuesday 16 September 2025 00:53:05 +0000 (0:00:03.510) 0:08:34.461 ***** 2025-09-16 00:55:09.022819 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:55:09.022826 | orchestrator | 2025-09-16 00:55:09.022833 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-09-16 00:55:09.022840 | orchestrator | Tuesday 16 September 2025 00:53:06 +0000 (0:00:01.184) 0:08:35.646 ***** 2025-09-16 00:55:09.022846 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.022853 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.022860 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.022866 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.022873 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.022880 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.022886 | orchestrator | 2025-09-16 00:55:09.022893 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-09-16 00:55:09.022900 | orchestrator | Tuesday 16 September 2025 00:53:06 +0000 (0:00:00.585) 0:08:36.232 ***** 2025-09-16 00:55:09.022906 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.022913 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.022920 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.022926 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:55:09.022933 | orchestrator | changed: 
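The ceph-crash play above creates a single client.crash keyring on a monitor, copies it to every node, prepares /var/lib/ceph/crash/posted, and then templates and starts a container-backed ceph-crash service everywhere. A minimal sketch of the keyring creation and the service start, using the crash profile caps documented for Ceph; the unit name and output path are simplified compared to the templated unit in the log:

  - name: Create client.crash keyring
    ansible.builtin.command: >
      ceph auth get-or-create client.crash
      mon 'profile crash' mgr 'profile crash'
      -o /etc/ceph/ceph.client.crash.keyring
    delegate_to: "{{ groups['mons'][0] }}"
    run_once: true

  - name: Start the ceph-crash service
    ansible.builtin.systemd:
      name: ceph-crash
      state: started
      enabled: true
      daemon_reload: true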
[testbed-node-2] 2025-09-16 00:55:09.022940 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:55:09.022946 | orchestrator | 2025-09-16 00:55:09.022953 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-09-16 00:55:09.022960 | orchestrator | Tuesday 16 September 2025 00:53:09 +0000 (0:00:02.434) 0:08:38.666 ***** 2025-09-16 00:55:09.022966 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.022973 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.022980 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.022986 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:55:09.022993 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:55:09.022999 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:55:09.023006 | orchestrator | 2025-09-16 00:55:09.023013 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-09-16 00:55:09.023019 | orchestrator | 2025-09-16 00:55:09.023026 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-16 00:55:09.023033 | orchestrator | Tuesday 16 September 2025 00:53:10 +0000 (0:00:00.848) 0:08:39.515 ***** 2025-09-16 00:55:09.023039 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.023046 | orchestrator | 2025-09-16 00:55:09.023057 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-16 00:55:09.023067 | orchestrator | Tuesday 16 September 2025 00:53:10 +0000 (0:00:00.802) 0:08:40.318 ***** 2025-09-16 00:55:09.023074 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.023081 | orchestrator | 2025-09-16 00:55:09.023088 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-16 00:55:09.023094 | orchestrator | Tuesday 16 September 2025 00:53:11 +0000 (0:00:00.479) 0:08:40.797 ***** 2025-09-16 00:55:09.023101 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.023108 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.023114 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.023121 | orchestrator | 2025-09-16 00:55:09.023128 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-16 00:55:09.023134 | orchestrator | Tuesday 16 September 2025 00:53:11 +0000 (0:00:00.530) 0:08:41.327 ***** 2025-09-16 00:55:09.023141 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.023148 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.023154 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.023161 | orchestrator | 2025-09-16 00:55:09.023167 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-16 00:55:09.023177 | orchestrator | Tuesday 16 September 2025 00:53:12 +0000 (0:00:00.756) 0:08:42.084 ***** 2025-09-16 00:55:09.023184 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.023191 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.023198 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.023204 | orchestrator | 2025-09-16 00:55:09.023211 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-16 00:55:09.023218 | orchestrator | Tuesday 16 September 2025 00:53:13 +0000 
(0:00:00.683) 0:08:42.768 ***** 2025-09-16 00:55:09.023224 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.023231 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.023237 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.023244 | orchestrator | 2025-09-16 00:55:09.023251 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-16 00:55:09.023257 | orchestrator | Tuesday 16 September 2025 00:53:14 +0000 (0:00:00.718) 0:08:43.486 ***** 2025-09-16 00:55:09.023264 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.023271 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.023277 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.023284 | orchestrator | 2025-09-16 00:55:09.023291 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-16 00:55:09.023297 | orchestrator | Tuesday 16 September 2025 00:53:14 +0000 (0:00:00.534) 0:08:44.020 ***** 2025-09-16 00:55:09.023304 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.023310 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.023317 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.023324 | orchestrator | 2025-09-16 00:55:09.023331 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-16 00:55:09.023337 | orchestrator | Tuesday 16 September 2025 00:53:14 +0000 (0:00:00.284) 0:08:44.305 ***** 2025-09-16 00:55:09.023344 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.023350 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.023357 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.023364 | orchestrator | 2025-09-16 00:55:09.023370 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-16 00:55:09.023377 | orchestrator | Tuesday 16 September 2025 00:53:15 +0000 (0:00:00.345) 0:08:44.650 ***** 2025-09-16 00:55:09.023384 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.023390 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.023397 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.023403 | orchestrator | 2025-09-16 00:55:09.023410 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-16 00:55:09.023416 | orchestrator | Tuesday 16 September 2025 00:53:15 +0000 (0:00:00.742) 0:08:45.393 ***** 2025-09-16 00:55:09.023427 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.023433 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.023440 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.023447 | orchestrator | 2025-09-16 00:55:09.023453 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-16 00:55:09.023460 | orchestrator | Tuesday 16 September 2025 00:53:16 +0000 (0:00:00.996) 0:08:46.389 ***** 2025-09-16 00:55:09.023467 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.023473 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.023480 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.023486 | orchestrator | 2025-09-16 00:55:09.023493 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-16 00:55:09.023500 | orchestrator | Tuesday 16 September 2025 00:53:17 +0000 (0:00:00.320) 0:08:46.709 ***** 2025-09-16 00:55:09.023506 | orchestrator | skipping: [testbed-node-3] 2025-09-16 
00:55:09.023513 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.023520 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.023526 | orchestrator | 2025-09-16 00:55:09.023533 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-16 00:55:09.023540 | orchestrator | Tuesday 16 September 2025 00:53:17 +0000 (0:00:00.321) 0:08:47.030 ***** 2025-09-16 00:55:09.023546 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.023553 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.023560 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.023566 | orchestrator | 2025-09-16 00:55:09.023573 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-16 00:55:09.023580 | orchestrator | Tuesday 16 September 2025 00:53:17 +0000 (0:00:00.300) 0:08:47.331 ***** 2025-09-16 00:55:09.023586 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.023593 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.023599 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.023606 | orchestrator | 2025-09-16 00:55:09.023613 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-16 00:55:09.023619 | orchestrator | Tuesday 16 September 2025 00:53:18 +0000 (0:00:00.589) 0:08:47.921 ***** 2025-09-16 00:55:09.023626 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.023632 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.023639 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.023645 | orchestrator | 2025-09-16 00:55:09.023652 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-16 00:55:09.023661 | orchestrator | Tuesday 16 September 2025 00:53:18 +0000 (0:00:00.327) 0:08:48.248 ***** 2025-09-16 00:55:09.023668 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.023675 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.023681 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.023688 | orchestrator | 2025-09-16 00:55:09.023695 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-16 00:55:09.023701 | orchestrator | Tuesday 16 September 2025 00:53:19 +0000 (0:00:00.291) 0:08:48.539 ***** 2025-09-16 00:55:09.023708 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.023715 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.023721 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.023728 | orchestrator | 2025-09-16 00:55:09.023735 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-16 00:55:09.023741 | orchestrator | Tuesday 16 September 2025 00:53:19 +0000 (0:00:00.304) 0:08:48.844 ***** 2025-09-16 00:55:09.023748 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.023755 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.023761 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.023768 | orchestrator | 2025-09-16 00:55:09.023775 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-16 00:55:09.023781 | orchestrator | Tuesday 16 September 2025 00:53:19 +0000 (0:00:00.528) 0:08:49.372 ***** 2025-09-16 00:55:09.023799 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.023809 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.023820 | orchestrator | ok: [testbed-node-5] 
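The "Check for a <daemon> container" tasks and the "Set_fact handler_<daemon>_status" tasks above record, per host, whether a given Ceph daemon container is already running; those registered facts later gate the corresponding restart handlers (for example "Restart ceph mds daemon(s)" further down). A minimal shell sketch of a roughly equivalent manual check, assuming a Docker runtime and the ceph-<daemon>-<hostname> container naming convention (both assumptions for illustration, not shown in this log):

    # Hypothetical equivalent of "Check for a mds container":
    # list a running container whose name matches ceph-mds-<short hostname>.
    docker ps --filter "name=ceph-mds-$(hostname -s)" --filter "status=running" --format '{{.Names}}'
    # A non-empty result corresponds to the "ok" status registered above;
    # an empty result would leave the daemon's handler status unset, so the
    # restart handler for that daemon would be skipped.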
2025-09-16 00:55:09.023827 | orchestrator | 2025-09-16 00:55:09.023834 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-16 00:55:09.023840 | orchestrator | Tuesday 16 September 2025 00:53:20 +0000 (0:00:00.371) 0:08:49.744 ***** 2025-09-16 00:55:09.023847 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.023854 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.023860 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.023867 | orchestrator | 2025-09-16 00:55:09.023874 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-09-16 00:55:09.023880 | orchestrator | Tuesday 16 September 2025 00:53:20 +0000 (0:00:00.540) 0:08:50.285 ***** 2025-09-16 00:55:09.023887 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.023894 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.023900 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-09-16 00:55:09.023907 | orchestrator | 2025-09-16 00:55:09.023914 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-09-16 00:55:09.023920 | orchestrator | Tuesday 16 September 2025 00:53:21 +0000 (0:00:00.683) 0:08:50.968 ***** 2025-09-16 00:55:09.023927 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-16 00:55:09.023934 | orchestrator | 2025-09-16 00:55:09.023940 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-09-16 00:55:09.023947 | orchestrator | Tuesday 16 September 2025 00:53:23 +0000 (0:00:02.177) 0:08:53.146 ***** 2025-09-16 00:55:09.023954 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-09-16 00:55:09.023962 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.023969 | orchestrator | 2025-09-16 00:55:09.023976 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-09-16 00:55:09.023982 | orchestrator | Tuesday 16 September 2025 00:53:23 +0000 (0:00:00.199) 0:08:53.345 ***** 2025-09-16 00:55:09.023990 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-16 00:55:09.024001 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-16 00:55:09.024008 | orchestrator | 2025-09-16 00:55:09.024015 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-09-16 00:55:09.024022 | orchestrator | Tuesday 16 September 2025 00:53:32 +0000 (0:00:09.018) 0:09:02.363 ***** 2025-09-16 00:55:09.024028 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-16 00:55:09.024035 | orchestrator | 2025-09-16 00:55:09.024042 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-09-16 00:55:09.024048 | 
orchestrator | Tuesday 16 September 2025 00:53:36 +0000 (0:00:03.582) 0:09:05.946 ***** 2025-09-16 00:55:09.024055 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.024062 | orchestrator | 2025-09-16 00:55:09.024069 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-09-16 00:55:09.024075 | orchestrator | Tuesday 16 September 2025 00:53:37 +0000 (0:00:00.884) 0:09:06.830 ***** 2025-09-16 00:55:09.024082 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-16 00:55:09.024089 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-16 00:55:09.024095 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-16 00:55:09.024106 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-09-16 00:55:09.024112 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-09-16 00:55:09.024119 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-09-16 00:55:09.024126 | orchestrator | 2025-09-16 00:55:09.024136 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-09-16 00:55:09.024142 | orchestrator | Tuesday 16 September 2025 00:53:38 +0000 (0:00:01.099) 0:09:07.929 ***** 2025-09-16 00:55:09.024149 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:55:09.024156 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-16 00:55:09.024163 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-16 00:55:09.024170 | orchestrator | 2025-09-16 00:55:09.024176 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-09-16 00:55:09.024183 | orchestrator | Tuesday 16 September 2025 00:53:40 +0000 (0:00:02.312) 0:09:10.242 ***** 2025-09-16 00:55:09.024189 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-16 00:55:09.024196 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-16 00:55:09.024203 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.024210 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-16 00:55:09.024216 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-16 00:55:09.024223 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.024230 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-16 00:55:09.024240 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-16 00:55:09.024247 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.024254 | orchestrator | 2025-09-16 00:55:09.024261 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-09-16 00:55:09.024267 | orchestrator | Tuesday 16 September 2025 00:53:42 +0000 (0:00:01.211) 0:09:11.454 ***** 2025-09-16 00:55:09.024274 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.024281 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.024287 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.024294 | orchestrator | 2025-09-16 00:55:09.024301 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-09-16 00:55:09.024307 | orchestrator | Tuesday 16 September 2025 00:53:44 +0000 (0:00:02.653) 
0:09:14.107 ***** 2025-09-16 00:55:09.024314 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.024321 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.024327 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.024334 | orchestrator | 2025-09-16 00:55:09.024341 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-09-16 00:55:09.024347 | orchestrator | Tuesday 16 September 2025 00:53:45 +0000 (0:00:00.467) 0:09:14.575 ***** 2025-09-16 00:55:09.024354 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.024361 | orchestrator | 2025-09-16 00:55:09.024368 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-09-16 00:55:09.024374 | orchestrator | Tuesday 16 September 2025 00:53:45 +0000 (0:00:00.541) 0:09:15.116 ***** 2025-09-16 00:55:09.024381 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.024388 | orchestrator | 2025-09-16 00:55:09.024394 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-09-16 00:55:09.024401 | orchestrator | Tuesday 16 September 2025 00:53:46 +0000 (0:00:00.617) 0:09:15.733 ***** 2025-09-16 00:55:09.024408 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.024415 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.024421 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.024428 | orchestrator | 2025-09-16 00:55:09.024435 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-09-16 00:55:09.024446 | orchestrator | Tuesday 16 September 2025 00:53:47 +0000 (0:00:01.128) 0:09:16.862 ***** 2025-09-16 00:55:09.024453 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.024460 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.024466 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.024473 | orchestrator | 2025-09-16 00:55:09.024480 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-09-16 00:55:09.024486 | orchestrator | Tuesday 16 September 2025 00:53:48 +0000 (0:00:01.110) 0:09:17.972 ***** 2025-09-16 00:55:09.024493 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.024500 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.024506 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.024513 | orchestrator | 2025-09-16 00:55:09.024520 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-09-16 00:55:09.024527 | orchestrator | Tuesday 16 September 2025 00:53:50 +0000 (0:00:01.721) 0:09:19.694 ***** 2025-09-16 00:55:09.024533 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.024540 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.024547 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.024553 | orchestrator | 2025-09-16 00:55:09.024560 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-09-16 00:55:09.024567 | orchestrator | Tuesday 16 September 2025 00:53:52 +0000 (0:00:02.326) 0:09:22.021 ***** 2025-09-16 00:55:09.024573 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.024580 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.024587 | orchestrator | ok: 
[testbed-node-5] 2025-09-16 00:55:09.024594 | orchestrator | 2025-09-16 00:55:09.024600 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-16 00:55:09.024607 | orchestrator | Tuesday 16 September 2025 00:53:53 +0000 (0:00:01.266) 0:09:23.287 ***** 2025-09-16 00:55:09.024614 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.024621 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.024627 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.024634 | orchestrator | 2025-09-16 00:55:09.024640 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-16 00:55:09.024647 | orchestrator | Tuesday 16 September 2025 00:53:54 +0000 (0:00:00.877) 0:09:24.165 ***** 2025-09-16 00:55:09.024654 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.024661 | orchestrator | 2025-09-16 00:55:09.024668 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-16 00:55:09.024677 | orchestrator | Tuesday 16 September 2025 00:53:55 +0000 (0:00:00.515) 0:09:24.680 ***** 2025-09-16 00:55:09.024684 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.024691 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.024697 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.024704 | orchestrator | 2025-09-16 00:55:09.024711 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-16 00:55:09.024717 | orchestrator | Tuesday 16 September 2025 00:53:55 +0000 (0:00:00.293) 0:09:24.973 ***** 2025-09-16 00:55:09.024724 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.024731 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.024737 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.024744 | orchestrator | 2025-09-16 00:55:09.024751 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-16 00:55:09.024757 | orchestrator | Tuesday 16 September 2025 00:53:56 +0000 (0:00:01.456) 0:09:26.430 ***** 2025-09-16 00:55:09.024764 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-16 00:55:09.024771 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-16 00:55:09.024777 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-16 00:55:09.024810 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.024817 | orchestrator | 2025-09-16 00:55:09.024824 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-16 00:55:09.024840 | orchestrator | Tuesday 16 September 2025 00:53:57 +0000 (0:00:00.620) 0:09:27.051 ***** 2025-09-16 00:55:09.024847 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.024854 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.024861 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.024868 | orchestrator | 2025-09-16 00:55:09.024874 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-16 00:55:09.024881 | orchestrator | 2025-09-16 00:55:09.024888 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-16 00:55:09.024894 | orchestrator | Tuesday 16 September 2025 00:53:58 +0000 (0:00:00.530) 0:09:27.581 ***** 2025-09-16 00:55:09.024901 | 
orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.024908 | orchestrator | 2025-09-16 00:55:09.024915 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-16 00:55:09.024921 | orchestrator | Tuesday 16 September 2025 00:53:58 +0000 (0:00:00.704) 0:09:28.286 ***** 2025-09-16 00:55:09.024928 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.024935 | orchestrator | 2025-09-16 00:55:09.024942 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-16 00:55:09.024948 | orchestrator | Tuesday 16 September 2025 00:53:59 +0000 (0:00:00.506) 0:09:28.793 ***** 2025-09-16 00:55:09.024955 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.024962 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.024968 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.024975 | orchestrator | 2025-09-16 00:55:09.024982 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-16 00:55:09.024988 | orchestrator | Tuesday 16 September 2025 00:53:59 +0000 (0:00:00.481) 0:09:29.274 ***** 2025-09-16 00:55:09.024995 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.025002 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.025008 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.025015 | orchestrator | 2025-09-16 00:55:09.025022 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-16 00:55:09.025028 | orchestrator | Tuesday 16 September 2025 00:54:00 +0000 (0:00:00.743) 0:09:30.018 ***** 2025-09-16 00:55:09.025035 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.025042 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.025049 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.025055 | orchestrator | 2025-09-16 00:55:09.025062 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-16 00:55:09.025069 | orchestrator | Tuesday 16 September 2025 00:54:01 +0000 (0:00:00.766) 0:09:30.785 ***** 2025-09-16 00:55:09.025075 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.025082 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.025089 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.025095 | orchestrator | 2025-09-16 00:55:09.025102 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-16 00:55:09.025108 | orchestrator | Tuesday 16 September 2025 00:54:02 +0000 (0:00:00.778) 0:09:31.564 ***** 2025-09-16 00:55:09.025114 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.025121 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.025127 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.025133 | orchestrator | 2025-09-16 00:55:09.025139 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-16 00:55:09.025146 | orchestrator | Tuesday 16 September 2025 00:54:02 +0000 (0:00:00.560) 0:09:32.124 ***** 2025-09-16 00:55:09.025152 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.025158 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.025164 | orchestrator | skipping: [testbed-node-5] 2025-09-16 
00:55:09.025170 | orchestrator | 2025-09-16 00:55:09.025177 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-16 00:55:09.025183 | orchestrator | Tuesday 16 September 2025 00:54:03 +0000 (0:00:00.324) 0:09:32.449 ***** 2025-09-16 00:55:09.025192 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.025199 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.025205 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.025211 | orchestrator | 2025-09-16 00:55:09.025217 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-16 00:55:09.025223 | orchestrator | Tuesday 16 September 2025 00:54:03 +0000 (0:00:00.283) 0:09:32.733 ***** 2025-09-16 00:55:09.025230 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.025236 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.025242 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.025248 | orchestrator | 2025-09-16 00:55:09.025254 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-16 00:55:09.025260 | orchestrator | Tuesday 16 September 2025 00:54:04 +0000 (0:00:00.727) 0:09:33.461 ***** 2025-09-16 00:55:09.025266 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.025276 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.025282 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.025288 | orchestrator | 2025-09-16 00:55:09.025294 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-16 00:55:09.025300 | orchestrator | Tuesday 16 September 2025 00:54:04 +0000 (0:00:00.943) 0:09:34.405 ***** 2025-09-16 00:55:09.025307 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.025313 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.025319 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.025325 | orchestrator | 2025-09-16 00:55:09.025331 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-16 00:55:09.025338 | orchestrator | Tuesday 16 September 2025 00:54:05 +0000 (0:00:00.274) 0:09:34.679 ***** 2025-09-16 00:55:09.025344 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.025350 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.025356 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.025362 | orchestrator | 2025-09-16 00:55:09.025369 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-16 00:55:09.025375 | orchestrator | Tuesday 16 September 2025 00:54:05 +0000 (0:00:00.260) 0:09:34.940 ***** 2025-09-16 00:55:09.025381 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.025387 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.025393 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.025400 | orchestrator | 2025-09-16 00:55:09.025408 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-16 00:55:09.025415 | orchestrator | Tuesday 16 September 2025 00:54:05 +0000 (0:00:00.286) 0:09:35.226 ***** 2025-09-16 00:55:09.025421 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.025427 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.025434 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.025440 | orchestrator | 2025-09-16 00:55:09.025446 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] 
****************************** 2025-09-16 00:55:09.025452 | orchestrator | Tuesday 16 September 2025 00:54:06 +0000 (0:00:00.515) 0:09:35.741 ***** 2025-09-16 00:55:09.025458 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.025465 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.025471 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.025477 | orchestrator | 2025-09-16 00:55:09.025483 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-16 00:55:09.025490 | orchestrator | Tuesday 16 September 2025 00:54:06 +0000 (0:00:00.311) 0:09:36.052 ***** 2025-09-16 00:55:09.025496 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.025502 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.025509 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.025515 | orchestrator | 2025-09-16 00:55:09.025521 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-16 00:55:09.025527 | orchestrator | Tuesday 16 September 2025 00:54:06 +0000 (0:00:00.280) 0:09:36.333 ***** 2025-09-16 00:55:09.025533 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.025543 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.025550 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.025556 | orchestrator | 2025-09-16 00:55:09.025562 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-16 00:55:09.025568 | orchestrator | Tuesday 16 September 2025 00:54:07 +0000 (0:00:00.303) 0:09:36.636 ***** 2025-09-16 00:55:09.025574 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.025580 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.025586 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.025593 | orchestrator | 2025-09-16 00:55:09.025599 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-16 00:55:09.025605 | orchestrator | Tuesday 16 September 2025 00:54:07 +0000 (0:00:00.480) 0:09:37.116 ***** 2025-09-16 00:55:09.025611 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.025617 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.025624 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.025630 | orchestrator | 2025-09-16 00:55:09.025636 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-16 00:55:09.025642 | orchestrator | Tuesday 16 September 2025 00:54:08 +0000 (0:00:00.320) 0:09:37.437 ***** 2025-09-16 00:55:09.025648 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.025654 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.025661 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.025667 | orchestrator | 2025-09-16 00:55:09.025673 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-09-16 00:55:09.025679 | orchestrator | Tuesday 16 September 2025 00:54:08 +0000 (0:00:00.464) 0:09:37.901 ***** 2025-09-16 00:55:09.025685 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.025692 | orchestrator | 2025-09-16 00:55:09.025698 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-16 00:55:09.025704 | orchestrator | Tuesday 16 September 2025 00:54:09 +0000 (0:00:00.582) 0:09:38.484 ***** 2025-09-16 00:55:09.025710 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:55:09.025716 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-16 00:55:09.025722 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-16 00:55:09.025729 | orchestrator | 2025-09-16 00:55:09.025735 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-16 00:55:09.025741 | orchestrator | Tuesday 16 September 2025 00:54:11 +0000 (0:00:02.066) 0:09:40.550 ***** 2025-09-16 00:55:09.025747 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-16 00:55:09.025753 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-16 00:55:09.025760 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.025766 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-16 00:55:09.025772 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-16 00:55:09.025778 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.025793 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-16 00:55:09.025800 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-16 00:55:09.025806 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.025812 | orchestrator | 2025-09-16 00:55:09.025819 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-09-16 00:55:09.025829 | orchestrator | Tuesday 16 September 2025 00:54:12 +0000 (0:00:01.307) 0:09:41.858 ***** 2025-09-16 00:55:09.025836 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.025842 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.025848 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.025854 | orchestrator | 2025-09-16 00:55:09.025860 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-09-16 00:55:09.025867 | orchestrator | Tuesday 16 September 2025 00:54:12 +0000 (0:00:00.306) 0:09:42.164 ***** 2025-09-16 00:55:09.025873 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.025883 | orchestrator | 2025-09-16 00:55:09.025889 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-09-16 00:55:09.025895 | orchestrator | Tuesday 16 September 2025 00:54:13 +0000 (0:00:00.718) 0:09:42.882 ***** 2025-09-16 00:55:09.025901 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-16 00:55:09.025910 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-16 00:55:09.025917 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-16 00:55:09.025923 | orchestrator | 2025-09-16 00:55:09.025930 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-09-16 00:55:09.025936 | orchestrator | Tuesday 16 September 2025 00:54:14 +0000 (0:00:00.800) 0:09:43.683 ***** 2025-09-16 00:55:09.025942 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 
00:55:09.025948 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-16 00:55:09.025955 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:55:09.025961 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-16 00:55:09.025967 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:55:09.025974 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-16 00:55:09.025980 | orchestrator | 2025-09-16 00:55:09.025986 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-16 00:55:09.025992 | orchestrator | Tuesday 16 September 2025 00:54:18 +0000 (0:00:04.443) 0:09:48.126 ***** 2025-09-16 00:55:09.025999 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:55:09.026005 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-16 00:55:09.026011 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:55:09.026032 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-16 00:55:09.026039 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:55:09.026045 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-16 00:55:09.026051 | orchestrator | 2025-09-16 00:55:09.026058 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-16 00:55:09.026064 | orchestrator | Tuesday 16 September 2025 00:54:21 +0000 (0:00:02.842) 0:09:50.969 ***** 2025-09-16 00:55:09.026070 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-16 00:55:09.026077 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.026083 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-16 00:55:09.026089 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.026096 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-16 00:55:09.026102 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.026108 | orchestrator | 2025-09-16 00:55:09.026114 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-09-16 00:55:09.026121 | orchestrator | Tuesday 16 September 2025 00:54:22 +0000 (0:00:01.167) 0:09:52.137 ***** 2025-09-16 00:55:09.026127 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-09-16 00:55:09.026133 | orchestrator | 2025-09-16 00:55:09.026139 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-09-16 00:55:09.026149 | orchestrator | Tuesday 16 September 2025 00:54:22 +0000 (0:00:00.209) 0:09:52.346 ***** 2025-09-16 00:55:09.026156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-16 00:55:09.026162 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-16 00:55:09.026168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-16 00:55:09.026175 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-16 00:55:09.026181 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-16 00:55:09.026190 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.026196 | orchestrator | 2025-09-16 00:55:09.026203 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-09-16 00:55:09.026209 | orchestrator | Tuesday 16 September 2025 00:54:23 +0000 (0:00:00.538) 0:09:52.885 ***** 2025-09-16 00:55:09.026215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-16 00:55:09.026222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-16 00:55:09.026228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-16 00:55:09.026234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-16 00:55:09.026240 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-16 00:55:09.026250 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.026256 | orchestrator | 2025-09-16 00:55:09.026263 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-09-16 00:55:09.026269 | orchestrator | Tuesday 16 September 2025 00:54:23 +0000 (0:00:00.541) 0:09:53.426 ***** 2025-09-16 00:55:09.026275 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-16 00:55:09.026282 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-16 00:55:09.026288 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-16 00:55:09.026294 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-16 00:55:09.026301 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-16 00:55:09.026307 | orchestrator | 2025-09-16 00:55:09.026313 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-09-16 00:55:09.026319 | orchestrator | Tuesday 16 September 2025 00:54:55 +0000 (0:00:31.647) 0:10:25.074 ***** 2025-09-16 00:55:09.026325 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.026332 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.026338 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.026344 | orchestrator | 2025-09-16 
00:55:09.026350 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-09-16 00:55:09.026357 | orchestrator | Tuesday 16 September 2025 00:54:55 +0000 (0:00:00.295) 0:10:25.369 ***** 2025-09-16 00:55:09.026366 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.026372 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.026379 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.026385 | orchestrator | 2025-09-16 00:55:09.026391 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-09-16 00:55:09.026398 | orchestrator | Tuesday 16 September 2025 00:54:56 +0000 (0:00:00.557) 0:10:25.927 ***** 2025-09-16 00:55:09.026404 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.026410 | orchestrator | 2025-09-16 00:55:09.026416 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-09-16 00:55:09.026422 | orchestrator | Tuesday 16 September 2025 00:54:57 +0000 (0:00:00.523) 0:10:26.450 ***** 2025-09-16 00:55:09.026429 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.026435 | orchestrator | 2025-09-16 00:55:09.026441 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-09-16 00:55:09.026447 | orchestrator | Tuesday 16 September 2025 00:54:57 +0000 (0:00:00.732) 0:10:27.182 ***** 2025-09-16 00:55:09.026453 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.026459 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.026465 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.026472 | orchestrator | 2025-09-16 00:55:09.026478 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-09-16 00:55:09.026484 | orchestrator | Tuesday 16 September 2025 00:54:58 +0000 (0:00:01.235) 0:10:28.418 ***** 2025-09-16 00:55:09.026490 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.026496 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.026502 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.026508 | orchestrator | 2025-09-16 00:55:09.026514 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-09-16 00:55:09.026521 | orchestrator | Tuesday 16 September 2025 00:55:00 +0000 (0:00:01.115) 0:10:29.533 ***** 2025-09-16 00:55:09.026527 | orchestrator | changed: [testbed-node-3] 2025-09-16 00:55:09.026533 | orchestrator | changed: [testbed-node-4] 2025-09-16 00:55:09.026539 | orchestrator | changed: [testbed-node-5] 2025-09-16 00:55:09.026545 | orchestrator | 2025-09-16 00:55:09.026551 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-09-16 00:55:09.026558 | orchestrator | Tuesday 16 September 2025 00:55:01 +0000 (0:00:01.741) 0:10:31.275 ***** 2025-09-16 00:55:09.026566 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-16 00:55:09.026573 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-16 00:55:09.026579 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 
'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-16 00:55:09.026585 | orchestrator | 2025-09-16 00:55:09.026592 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-16 00:55:09.026598 | orchestrator | Tuesday 16 September 2025 00:55:04 +0000 (0:00:02.763) 0:10:34.038 ***** 2025-09-16 00:55:09.026604 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.026610 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.026616 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.026623 | orchestrator | 2025-09-16 00:55:09.026629 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-16 00:55:09.026635 | orchestrator | Tuesday 16 September 2025 00:55:04 +0000 (0:00:00.308) 0:10:34.347 ***** 2025-09-16 00:55:09.026644 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:55:09.026650 | orchestrator | 2025-09-16 00:55:09.026657 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-16 00:55:09.026666 | orchestrator | Tuesday 16 September 2025 00:55:05 +0000 (0:00:00.744) 0:10:35.091 ***** 2025-09-16 00:55:09.026673 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.026679 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.026685 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.026692 | orchestrator | 2025-09-16 00:55:09.026698 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-16 00:55:09.026704 | orchestrator | Tuesday 16 September 2025 00:55:05 +0000 (0:00:00.301) 0:10:35.392 ***** 2025-09-16 00:55:09.026710 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.026717 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:55:09.026723 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:55:09.026729 | orchestrator | 2025-09-16 00:55:09.026736 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-16 00:55:09.026742 | orchestrator | Tuesday 16 September 2025 00:55:06 +0000 (0:00:00.318) 0:10:35.710 ***** 2025-09-16 00:55:09.026748 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-16 00:55:09.026755 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-16 00:55:09.026761 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-16 00:55:09.026767 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:55:09.026774 | orchestrator | 2025-09-16 00:55:09.026780 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-16 00:55:09.026796 | orchestrator | Tuesday 16 September 2025 00:55:07 +0000 (0:00:01.031) 0:10:36.742 ***** 2025-09-16 00:55:09.026803 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:55:09.026809 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:55:09.026815 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:55:09.026821 | orchestrator | 2025-09-16 00:55:09.026827 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:55:09.026834 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-09-16 00:55:09.026840 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 
ignored=0 2025-09-16 00:55:09.026846 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-09-16 00:55:09.026853 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-09-16 00:55:09.026859 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-09-16 00:55:09.026865 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-09-16 00:55:09.026871 | orchestrator | 2025-09-16 00:55:09.026878 | orchestrator | 2025-09-16 00:55:09.026884 | orchestrator | 2025-09-16 00:55:09.026890 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:55:09.026896 | orchestrator | Tuesday 16 September 2025 00:55:07 +0000 (0:00:00.245) 0:10:36.988 ***** 2025-09-16 00:55:09.026902 | orchestrator | =============================================================================== 2025-09-16 00:55:09.026909 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 58.99s 2025-09-16 00:55:09.026915 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 43.25s 2025-09-16 00:55:09.026921 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.65s 2025-09-16 00:55:09.026927 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.80s 2025-09-16 00:55:09.026933 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.48s 2025-09-16 00:55:09.026939 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.76s 2025-09-16 00:55:09.026949 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.89s 2025-09-16 00:55:09.026955 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.81s 2025-09-16 00:55:09.026964 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 9.02s 2025-09-16 00:55:09.026970 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.87s 2025-09-16 00:55:09.026977 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.46s 2025-09-16 00:55:09.026983 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.94s 2025-09-16 00:55:09.026989 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.55s 2025-09-16 00:55:09.026995 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.44s 2025-09-16 00:55:09.027001 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.20s 2025-09-16 00:55:09.027007 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.93s 2025-09-16 00:55:09.027013 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.71s 2025-09-16 00:55:09.027019 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.58s 2025-09-16 00:55:09.027026 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.51s 2025-09-16 00:55:09.027032 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.48s 2025-09-16 
00:55:09.027041 | orchestrator | 2025-09-16 00:55:08 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED 2025-09-16 00:55:09.027048 | orchestrator | 2025-09-16 00:55:08 | INFO  | Task 4a864d5d-dde9-47a4-8097-94ba981fcf49 is in state STARTED 2025-09-16 00:55:09.027054 | orchestrator | 2025-09-16 00:55:08 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:55:12.048579 | orchestrator | 2025-09-16 00:55:12 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED 2025-09-16 00:55:12.050990 | orchestrator | 2025-09-16 00:55:12 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED 2025-09-16 00:55:12.055263 | orchestrator | 2025-09-16 00:55:12 | INFO  | Task 4a864d5d-dde9-47a4-8097-94ba981fcf49 is in state STARTED 2025-09-16 00:55:12.055287 | orchestrator | 2025-09-16 00:55:12 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:55:15.109366 | orchestrator | 2025-09-16 00:55:15 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED 2025-09-16 00:55:15.112311 | orchestrator | 2025-09-16 00:55:15 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED 2025-09-16 00:55:15.114717 | orchestrator | 2025-09-16 00:55:15 | INFO  | Task 4a864d5d-dde9-47a4-8097-94ba981fcf49 is in state STARTED 2025-09-16 00:55:15.114742 | orchestrator | 2025-09-16 00:55:15 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:55:18.148133 | orchestrator | 2025-09-16 00:55:18 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED 2025-09-16 00:55:18.149463 | orchestrator | 2025-09-16 00:55:18 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED 2025-09-16 00:55:18.152339 | orchestrator | 2025-09-16 00:55:18 | INFO  | Task 4a864d5d-dde9-47a4-8097-94ba981fcf49 is in state STARTED 2025-09-16 00:55:18.152690 | orchestrator | 2025-09-16 00:55:18 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:55:21.191277 | orchestrator | 2025-09-16 00:55:21 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED 2025-09-16 00:55:21.192424 | orchestrator | 2025-09-16 00:55:21 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED 2025-09-16 00:55:21.194160 | orchestrator | 2025-09-16 00:55:21 | INFO  | Task 4a864d5d-dde9-47a4-8097-94ba981fcf49 is in state STARTED 2025-09-16 00:55:21.194220 | orchestrator | 2025-09-16 00:55:21 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:55:24.244961 | orchestrator | 2025-09-16 00:55:24 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED 2025-09-16 00:55:24.247405 | orchestrator | 2025-09-16 00:55:24 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED 2025-09-16 00:55:24.249610 | orchestrator | 2025-09-16 00:55:24 | INFO  | Task 4a864d5d-dde9-47a4-8097-94ba981fcf49 is in state STARTED 2025-09-16 00:55:24.250116 | orchestrator | 2025-09-16 00:55:24 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:55:27.305586 | orchestrator | 2025-09-16 00:55:27 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED 2025-09-16 00:55:27.308714 | orchestrator | 2025-09-16 00:55:27 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED 2025-09-16 00:55:27.309559 | orchestrator | 2025-09-16 00:55:27 | INFO  | Task 4a864d5d-dde9-47a4-8097-94ba981fcf49 is in state STARTED 2025-09-16 00:55:27.309582 | orchestrator | 2025-09-16 00:55:27 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:55:30.355466 | orchestrator | 2025-09-16 
00:55:30 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED
2025-09-16 00:55:30.362504 | orchestrator | 2025-09-16 00:55:30 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED
2025-09-16 00:55:30.364539 | orchestrator | 2025-09-16 00:55:30 | INFO  | Task 4a864d5d-dde9-47a4-8097-94ba981fcf49 is in state STARTED
2025-09-16 00:55:30.364562 | orchestrator | 2025-09-16 00:55:30 | INFO  | Wait 1 second(s) until the next check
2025-09-16 00:55:33.409243 | orchestrator | 2025-09-16 00:55:33 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED
2025-09-16 00:55:33.410883 | orchestrator | 2025-09-16 00:55:33 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED
2025-09-16 00:55:33.413059 | orchestrator | 2025-09-16 00:55:33 | INFO  | Task 4a864d5d-dde9-47a4-8097-94ba981fcf49 is in state STARTED
2025-09-16 00:55:33.413080 | orchestrator | 2025-09-16 00:55:33 | INFO  | Wait 1 second(s) until the next check
2025-09-16 00:55:36.464289 | orchestrator | 2025-09-16 00:55:36 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED
2025-09-16 00:55:36.464913 | orchestrator | 2025-09-16 00:55:36 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED
2025-09-16 00:55:36.466421 | orchestrator | 2025-09-16 00:55:36 | INFO  | Task 4a864d5d-dde9-47a4-8097-94ba981fcf49 is in state STARTED
2025-09-16 00:55:36.466716 | orchestrator | 2025-09-16 00:55:36 | INFO  | Wait 1 second(s) until the next check
2025-09-16 00:55:39.515313 | orchestrator | 2025-09-16 00:55:39 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED
2025-09-16 00:55:39.518634 | orchestrator | 2025-09-16 00:55:39 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED
2025-09-16 00:55:39.523341 | orchestrator | 2025-09-16 00:55:39 | INFO  | Task 4a864d5d-dde9-47a4-8097-94ba981fcf49 is in state STARTED
2025-09-16 00:55:39.523368 | orchestrator | 2025-09-16 00:55:39 | INFO  | Wait 1 second(s) until the next check
2025-09-16 00:55:42.567925 | orchestrator | 2025-09-16 00:55:42 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED
2025-09-16 00:55:42.569229 | orchestrator | 2025-09-16 00:55:42 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED
2025-09-16 00:55:42.570950 | orchestrator | 2025-09-16 00:55:42 | INFO  | Task 4a864d5d-dde9-47a4-8097-94ba981fcf49 is in state STARTED
2025-09-16 00:55:42.571236 | orchestrator | 2025-09-16 00:55:42 | INFO  | Wait 1 second(s) until the next check
2025-09-16 00:55:45.618563 | orchestrator | 2025-09-16 00:55:45 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED
2025-09-16 00:55:45.620981 | orchestrator | 2025-09-16 00:55:45 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED
2025-09-16 00:55:45.623942 | orchestrator | 2025-09-16 00:55:45 | INFO  | Task 4a864d5d-dde9-47a4-8097-94ba981fcf49 is in state STARTED
2025-09-16 00:55:45.623990 | orchestrator | 2025-09-16 00:55:45 | INFO  | Wait 1 second(s) until the next check
2025-09-16 00:55:48.667273 | orchestrator | 2025-09-16 00:55:48 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED
2025-09-16 00:55:48.668942 | orchestrator | 2025-09-16 00:55:48 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED
2025-09-16 00:55:48.672188 | orchestrator | 2025-09-16 00:55:48 | INFO  | Task 4a864d5d-dde9-47a4-8097-94ba981fcf49 is in state STARTED
2025-09-16 00:55:48.673353 | orchestrator | 2025-09-16 00:55:48 | INFO  | Wait 1 second(s) until the next check
2025-09-16 00:55:51.726696 | orchestrator | 2025-09-16 00:55:51 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED
2025-09-16 00:55:51.728352 | orchestrator | 2025-09-16 00:55:51 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED
2025-09-16 00:55:51.731549 | orchestrator | 2025-09-16 00:55:51 | INFO  | Task 4a864d5d-dde9-47a4-8097-94ba981fcf49 is in state STARTED
2025-09-16 00:55:51.731578 | orchestrator | 2025-09-16 00:55:51 | INFO  | Wait 1 second(s) until the next check
2025-09-16 00:55:54.778258 | orchestrator | 2025-09-16 00:55:54 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED
2025-09-16 00:55:54.779719 | orchestrator | 2025-09-16 00:55:54 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED
2025-09-16 00:55:54.782199 | orchestrator | 2025-09-16 00:55:54 | INFO  | Task 4a864d5d-dde9-47a4-8097-94ba981fcf49 is in state STARTED
2025-09-16 00:55:54.782224 | orchestrator | 2025-09-16 00:55:54 | INFO  | Wait 1 second(s) until the next check
2025-09-16 00:55:57.833359 | orchestrator | 2025-09-16 00:55:57 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED
2025-09-16 00:55:57.835385 | orchestrator | 2025-09-16 00:55:57 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED
2025-09-16 00:55:57.837022 | orchestrator | 2025-09-16 00:55:57 | INFO  | Task 4a864d5d-dde9-47a4-8097-94ba981fcf49 is in state STARTED
2025-09-16 00:55:57.837546 | orchestrator | 2025-09-16 00:55:57 | INFO  | Wait 1 second(s) until the next check
2025-09-16 00:56:00.882533 | orchestrator | 2025-09-16 00:56:00 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED
2025-09-16 00:56:00.884314 | orchestrator | 2025-09-16 00:56:00 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED
2025-09-16 00:56:00.886563 | orchestrator | 2025-09-16 00:56:00 | INFO  | Task 4a864d5d-dde9-47a4-8097-94ba981fcf49 is in state SUCCESS
2025-09-16 00:56:00.890364 | orchestrator |
2025-09-16 00:56:00.890443 | orchestrator |
2025-09-16 00:56:00.890460 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-16 00:56:00.890473 | orchestrator |
2025-09-16 00:56:00.890485 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-16 00:56:00.890497 | orchestrator | Tuesday 16 September 2025 00:53:15 +0000 (0:00:00.272) 0:00:00.272 *****
2025-09-16 00:56:00.890508 | orchestrator | ok: [testbed-node-0]
2025-09-16 00:56:00.890521 | orchestrator | ok: [testbed-node-1]
2025-09-16 00:56:00.890532 | orchestrator | ok: [testbed-node-2]
2025-09-16 00:56:00.890543 | orchestrator |
2025-09-16 00:56:00.890554 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-16 00:56:00.890589 | orchestrator | Tuesday 16 September 2025 00:53:15 +0000 (0:00:00.285) 0:00:00.557 *****
2025-09-16 00:56:00.890602 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-09-16 00:56:00.890613 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-09-16 00:56:00.890624 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-09-16 00:56:00.890635 | orchestrator |
2025-09-16 00:56:00.890646 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-09-16 00:56:00.890657 | orchestrator |
2025-09-16 00:56:00.890669 | orchestrator | TASK [opensearch : 
include_tasks] ********************************************** 2025-09-16 00:56:00.890680 | orchestrator | Tuesday 16 September 2025 00:53:16 +0000 (0:00:00.407) 0:00:00.965 ***** 2025-09-16 00:56:00.890691 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:56:00.890702 | orchestrator | 2025-09-16 00:56:00.890713 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-09-16 00:56:00.890724 | orchestrator | Tuesday 16 September 2025 00:53:16 +0000 (0:00:00.509) 0:00:01.474 ***** 2025-09-16 00:56:00.890735 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-16 00:56:00.890746 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-16 00:56:00.890756 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-16 00:56:00.890767 | orchestrator | 2025-09-16 00:56:00.890779 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-09-16 00:56:00.890790 | orchestrator | Tuesday 16 September 2025 00:53:17 +0000 (0:00:00.655) 0:00:02.130 ***** 2025-09-16 00:56:00.890839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-16 00:56:00.890865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-16 00:56:00.890919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-16 00:56:00.890949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-16 00:56:00.890965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-16 00:56:00.890980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}}}}) 2025-09-16 00:56:00.890995 | orchestrator | 2025-09-16 00:56:00.891011 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-16 00:56:00.891024 | orchestrator | Tuesday 16 September 2025 00:53:19 +0000 (0:00:01.774) 0:00:03.904 ***** 2025-09-16 00:56:00.891043 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:56:00.891056 | orchestrator | 2025-09-16 00:56:00.891069 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-09-16 00:56:00.891088 | orchestrator | Tuesday 16 September 2025 00:53:19 +0000 (0:00:00.490) 0:00:04.395 ***** 2025-09-16 00:56:00.891111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-16 00:56:00.891126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-16 00:56:00.891140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-16 00:56:00.891153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-16 00:56:00.891179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-16 00:56:00.891201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-16 00:56:00.891213 | orchestrator | 2025-09-16 00:56:00.891226 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-09-16 00:56:00.891239 | orchestrator | Tuesday 16 September 2025 00:53:22 +0000 (0:00:02.832) 0:00:07.227 ***** 2025-09-16 00:56:00.891252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-16 00:56:00.891265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-16 00:56:00.891286 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:56:00.891303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-16 00:56:00.891323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-16 00:56:00.891335 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:56:00.891347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-16 00:56:00.891359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-16 00:56:00.891370 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:56:00.891388 | orchestrator | 2025-09-16 00:56:00.891399 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-09-16 00:56:00.891409 | orchestrator | Tuesday 16 September 2025 00:53:23 +0000 (0:00:00.969) 0:00:08.197 ***** 2025-09-16 00:56:00.891426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-16 00:56:00.891446 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-16 00:56:00.891458 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:56:00.891469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-16 00:56:00.891481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-16 00:56:00.891506 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:56:00.891522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-16 00:56:00.891542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-16 00:56:00.891554 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:56:00.891565 | orchestrator | 2025-09-16 00:56:00.891576 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-09-16 00:56:00.891587 | orchestrator | Tuesday 16 September 2025 00:53:24 +0000 (0:00:01.121) 0:00:09.319 ***** 2025-09-16 00:56:00.891598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-16 00:56:00.891610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-16 00:56:00.891635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-16 00:56:00.891653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-16 00:56:00.891667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-16 00:56:00.891679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-16 00:56:00.891697 | orchestrator | 2025-09-16 00:56:00.891708 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-09-16 00:56:00.891719 | orchestrator | Tuesday 16 September 2025 00:53:27 +0000 (0:00:02.417) 0:00:11.736 ***** 2025-09-16 00:56:00.891730 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:56:00.891741 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:56:00.891752 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:56:00.891763 | orchestrator | 2025-09-16 00:56:00.891773 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-16 00:56:00.891784 | orchestrator | Tuesday 16 September 2025 00:53:29 +0000 (0:00:02.599) 0:00:14.335 ***** 2025-09-16 00:56:00.891815 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:56:00.891826 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:56:00.891837 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:56:00.891848 | orchestrator | 2025-09-16 00:56:00.891858 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-16 00:56:00.891869 | orchestrator | Tuesday 16 September 2025 00:53:31 +0000 (0:00:01.835) 0:00:16.171 ***** 2025-09-16 00:56:00.891886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-16 00:56:00.891905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-16 00:56:00.891918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-16 00:56:00.891929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-16 00:56:00.891953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-16 00:56:00.891972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-16 00:56:00.891984 | orchestrator | 2025-09-16 00:56:00.891995 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-16 00:56:00.892006 | orchestrator | Tuesday 16 September 2025 00:53:33 +0000 (0:00:02.382) 0:00:18.553 ***** 2025-09-16 00:56:00.892017 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:56:00.892028 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:56:00.892038 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:56:00.892049 | orchestrator | 2025-09-16 00:56:00.892060 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-16 00:56:00.892070 | orchestrator | Tuesday 16 September 2025 00:53:34 +0000 (0:00:00.275) 0:00:18.829 ***** 2025-09-16 00:56:00.892081 | orchestrator | 2025-09-16 00:56:00.892092 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-16 00:56:00.892102 | orchestrator | Tuesday 16 September 2025 00:53:34 +0000 (0:00:00.068) 0:00:18.897 ***** 2025-09-16 00:56:00.892113 | orchestrator | 2025-09-16 00:56:00.892124 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-16 00:56:00.892134 | orchestrator | Tuesday 16 September 2025 00:53:34 +0000 (0:00:00.067) 0:00:18.965 ***** 2025-09-16 00:56:00.892151 | orchestrator | 2025-09-16 00:56:00.892162 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-09-16 00:56:00.892172 | orchestrator | Tuesday 16 September 2025 00:53:34 +0000 (0:00:00.063) 0:00:19.028 ***** 2025-09-16 00:56:00.892183 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:56:00.892194 | orchestrator | 2025-09-16 00:56:00.892204 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-09-16 00:56:00.892215 | orchestrator | Tuesday 16 September 2025 00:53:34 +0000 (0:00:00.223) 0:00:19.251 ***** 2025-09-16 00:56:00.892226 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:56:00.892236 | orchestrator | 2025-09-16 00:56:00.892247 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-09-16 00:56:00.892258 | orchestrator | Tuesday 16 September 2025 00:53:35 +0000 (0:00:00.545) 0:00:19.796 ***** 2025-09-16 00:56:00.892268 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:56:00.892279 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:56:00.892290 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:56:00.892300 | orchestrator | 2025-09-16 00:56:00.892311 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-09-16 
00:56:00.892321 | orchestrator | Tuesday 16 September 2025 00:54:33 +0000 (0:00:58.036) 0:01:17.833 *****
2025-09-16 00:56:00.892332 | orchestrator | changed: [testbed-node-0]
2025-09-16 00:56:00.892343 | orchestrator | changed: [testbed-node-1]
2025-09-16 00:56:00.892354 | orchestrator | changed: [testbed-node-2]
2025-09-16 00:56:00.892364 | orchestrator |
2025-09-16 00:56:00.892375 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-09-16 00:56:00.892386 | orchestrator | Tuesday 16 September 2025 00:55:49 +0000 (0:01:15.956) 0:02:33.790 *****
2025-09-16 00:56:00.892396 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-16 00:56:00.892407 | orchestrator |
2025-09-16 00:56:00.892418 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-09-16 00:56:00.892428 | orchestrator | Tuesday 16 September 2025 00:55:49 +0000 (0:00:00.480) 0:02:34.270 *****
2025-09-16 00:56:00.892439 | orchestrator | ok: [testbed-node-0]
2025-09-16 00:56:00.892450 | orchestrator |
2025-09-16 00:56:00.892460 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-09-16 00:56:00.892471 | orchestrator | Tuesday 16 September 2025 00:55:52 +0000 (0:00:02.746) 0:02:37.016 *****
2025-09-16 00:56:00.892481 | orchestrator | ok: [testbed-node-0]
2025-09-16 00:56:00.892492 | orchestrator |
2025-09-16 00:56:00.892503 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-09-16 00:56:00.892513 | orchestrator | Tuesday 16 September 2025 00:55:54 +0000 (0:00:02.297) 0:02:39.314 *****
2025-09-16 00:56:00.892524 | orchestrator | changed: [testbed-node-0]
2025-09-16 00:56:00.892534 | orchestrator |
2025-09-16 00:56:00.892545 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-09-16 00:56:00.892560 | orchestrator | Tuesday 16 September 2025 00:55:57 +0000 (0:00:02.703) 0:02:42.018 *****
2025-09-16 00:56:00.892571 | orchestrator | changed: [testbed-node-0]
2025-09-16 00:56:00.892582 | orchestrator |
2025-09-16 00:56:00.892593 | orchestrator | PLAY RECAP *********************************************************************
2025-09-16 00:56:00.892604 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-16 00:56:00.892616 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-16 00:56:00.892627 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-16 00:56:00.892638 | orchestrator |
2025-09-16 00:56:00.892649 | orchestrator |
2025-09-16 00:56:00.892660 | orchestrator | TASKS RECAP ********************************************************************
2025-09-16 00:56:00.892682 | orchestrator | Tuesday 16 September 2025 00:55:59 +0000 (0:00:02.578) 0:02:44.596 *****
2025-09-16 00:56:00.892694 | orchestrator | ===============================================================================
2025-09-16 00:56:00.892704 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 75.96s
2025-09-16 00:56:00.892715 | orchestrator | opensearch : Restart opensearch container ------------------------------ 58.04s
2025-09-16 00:56:00.892726 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.83s
2025-09-16 00:56:00.892737 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.75s
2025-09-16 00:56:00.892747 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.70s
2025-09-16 00:56:00.892758 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.60s
2025-09-16 00:56:00.892769 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.58s
2025-09-16 00:56:00.892779 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.42s
2025-09-16 00:56:00.892790 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.38s
2025-09-16 00:56:00.892817 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.30s
2025-09-16 00:56:00.892828 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.84s
2025-09-16 00:56:00.892838 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.77s
2025-09-16 00:56:00.892849 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.12s
2025-09-16 00:56:00.892860 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.97s
2025-09-16 00:56:00.892870 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.66s
2025-09-16 00:56:00.892881 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.55s
2025-09-16 00:56:00.892892 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s
2025-09-16 00:56:00.892902 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.49s
2025-09-16 00:56:00.892913 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.48s
2025-09-16 00:56:00.892923 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s
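
The three retention-policy tasks recapped above (check for an existing policy, create one, apply it to existing indices) talk to OpenSearch's Index State Management (ISM) plugin. The actual policy body used by the role is not shown in this log; the following is a minimal illustrative sketch of the same three-step shape against the ISM REST API, with placeholder endpoint, policy name, index pattern, and retention age that are assumptions rather than values taken from this deployment.

# Illustrative sketch only: endpoint, policy id, index pattern and retention
# age below are placeholders, not values read from this job.
import requests

OPENSEARCH = "http://localhost:9200"   # placeholder endpoint
POLICY_ID = "retention"                # hypothetical policy name
POLICY = {
    "policy": {
        "description": "Delete indices after a fixed age (example values)",
        "default_state": "hot",
        "states": [
            {"name": "hot",
             "actions": [],
             "transitions": [{"state_name": "delete",
                              "conditions": {"min_index_age": "14d"}}]},
            {"name": "delete",
             "actions": [{"delete": {}}],
             "transitions": []},
        ],
        "ism_template": [{"index_patterns": ["log-*"], "priority": 100}],
    }
}

# 1. Check whether a policy with this id already exists (HTTP 404 means it does not).
exists = requests.get(f"{OPENSEARCH}/_plugins/_ism/policies/{POLICY_ID}").status_code == 200

# 2. Create the policy if it is missing.
if not exists:
    requests.put(f"{OPENSEARCH}/_plugins/_ism/policies/{POLICY_ID}", json=POLICY).raise_for_status()

# 3. Attach the policy to indices that already exist and match the pattern.
requests.post(f"{OPENSEARCH}/_plugins/_ism/add/log-*", json={"policy_id": POLICY_ID}).raise_for_status()

Only the check/create/apply sequence is taken from the task names above; the Ansible role presumably drives the same API through its own tasks, and every concrete value here is a stand-in.
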
2025-09-16 00:56:00.892934 | orchestrator | 2025-09-16 00:56:00 | INFO  | Wait 1 second(s) until the next check
2025-09-16 00:56:03.936758 | orchestrator | 2025-09-16 00:56:03 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED
2025-09-16 00:56:03.939749 | orchestrator | 2025-09-16 00:56:03 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED
2025-09-16 00:56:03.940108 | orchestrator | 2025-09-16 00:56:03 | INFO  | Wait 1 second(s) until the next check
2025-09-16 00:56:06.986563 | orchestrator | 2025-09-16 00:56:06 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED
2025-09-16 00:56:06.990085 | orchestrator | 2025-09-16 00:56:06 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED
2025-09-16 00:56:06.990132 | orchestrator | 2025-09-16 00:56:06 | INFO  | Wait 1 second(s) until the next check
2025-09-16 00:56:10.036585 | orchestrator | 2025-09-16 00:56:10 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED
2025-09-16 00:56:10.038852 | orchestrator | 2025-09-16 00:56:10 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED
2025-09-16 00:56:10.039595 | orchestrator | 2025-09-16 00:56:10 | INFO  | Wait 1 second(s) until the next check
2025-09-16 00:56:13.084638 | orchestrator | 2025-09-16 00:56:13 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED
2025-09-16 00:56:13.088154 | orchestrator | 2025-09-16 00:56:13 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED
2025-09-16 00:56:13.088215 | orchestrator | 2025-09-16 00:56:13 | INFO  | Wait 1 second(s) until the next check
2025-09-16 00:56:16.132609 | orchestrator | 2025-09-16 00:56:16 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED
2025-09-16 00:56:16.132731 | orchestrator | 2025-09-16 00:56:16 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED
2025-09-16 00:56:16.132747 | orchestrator | 2025-09-16 00:56:16 | INFO  | Wait 1 second(s) until the next check
2025-09-16 00:56:19.178442 | orchestrator | 2025-09-16 00:56:19 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED
2025-09-16 00:56:19.179994 | orchestrator | 2025-09-16 00:56:19 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED
2025-09-16 00:56:19.180024 | orchestrator | 2025-09-16 00:56:19 | INFO  | Wait 1 second(s) until the next check
2025-09-16 00:56:22.223181 | orchestrator | 2025-09-16 00:56:22 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED
2025-09-16 00:56:22.224364 | orchestrator | 2025-09-16 00:56:22 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state STARTED
2025-09-16 00:56:22.224392 | orchestrator | 2025-09-16 00:56:22 | INFO  | Wait 1 second(s) until the next check
2025-09-16 00:56:25.270914 | orchestrator | 2025-09-16 00:56:25 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED
2025-09-16 00:56:25.272496 | orchestrator | 2025-09-16 00:56:25 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED
2025-09-16 00:56:25.281009 | orchestrator |
2025-09-16 00:56:25.281079 | orchestrator |
2025-09-16 00:56:25.281162 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-09-16 00:56:25.281274 | orchestrator |
2025-09-16 00:56:25.281296 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-09-16 00:56:25.281316 | orchestrator | Tuesday 16 September 2025 00:53:15 +0000 (0:00:00.110) 0:00:00.110 *****
2025-09-16 00:56:25.281335 | orchestrator | ok: [localhost] => {
2025-09-16 00:56:25.281357 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2025-09-16 00:56:25.281376 | orchestrator | }
2025-09-16 00:56:25.281395 | orchestrator |
2025-09-16 00:56:25.281414 | orchestrator | TASK [Check MariaDB service] ***************************************************
2025-09-16 00:56:25.281434 | orchestrator | Tuesday 16 September 2025 00:53:15 +0000 (0:00:00.053) 0:00:00.163 *****
2025-09-16 00:56:25.281451 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2025-09-16 00:56:25.281465 | orchestrator | ...ignoring
2025-09-16 00:56:25.281477 | orchestrator |
2025-09-16 00:56:25.281488 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2025-09-16 00:56:25.281499 | orchestrator | Tuesday 16 September 2025 00:53:18 +0000 (0:00:02.875) 0:00:03.039 *****
2025-09-16 00:56:25.281510 | orchestrator | skipping: [localhost]
2025-09-16 00:56:25.281521 | orchestrator |
2025-09-16 00:56:25.281532 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2025-09-16 00:56:25.281542 | orchestrator | Tuesday 16 September 2025 00:53:18 +0000 (0:00:00.041) 0:00:03.081 *****
2025-09-16 00:56:25.281554 | orchestrator | ok: [localhost]
2025-09-16 00:56:25.281565 | orchestrator |
2025-09-16 00:56:25.281576 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-16 00:56:25.281587 | orchestrator |
2025-09-16 00:56:25.281597 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-16 00:56:25.281608 | orchestrator | Tuesday 16 September 2025 00:53:18 +0000 (0:00:00.158) 0:00:03.239 *****
2025-09-16 00:56:25.281619 | orchestrator | ok: [testbed-node-0]
2025-09-16 00:56:25.281631 | orchestrator | ok: [testbed-node-1]
2025-09-16 00:56:25.281641 | orchestrator | ok: [testbed-node-2]
2025-09-16 00:56:25.281683 | orchestrator |
2025-09-16 00:56:25.281702 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-16 00:56:25.281721 | orchestrator | Tuesday 16 September 2025 00:53:19 +0000 (0:00:00.295) 0:00:03.535 *****
2025-09-16 00:56:25.281741 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-09-16 00:56:25.281760 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-09-16 00:56:25.281776 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-09-16 00:56:25.281788 | orchestrator |
2025-09-16 00:56:25.281798 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-09-16 00:56:25.281870 | orchestrator |
2025-09-16 00:56:25.281890 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-09-16 00:56:25.281903 | orchestrator | Tuesday 16 September 2025 00:53:19 +0000 (0:00:00.552) 0:00:04.088 *****
2025-09-16 00:56:25.281920 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-16 00:56:25.281939 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-09-16 00:56:25.281959 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-09-16 00:56:25.281983 | orchestrator |
2025-09-16 00:56:25.282009 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-09-16 00:56:25.282088 | orchestrator | Tuesday 16 September 2025 00:53:20 +0000 (0:00:00.409) 0:00:04.497 *****
2025-09-16 00:56:25.282103 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-16 00:56:25.282116 | orchestrator |
2025-09-16 00:56:25.282127 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2025-09-16 00:56:25.282138 | orchestrator | Tuesday 16 September 2025 00:53:20 +0000 (0:00:00.608) 0:00:05.106 ***** 2025-09-16
00:56:25.282191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-16 00:56:25.282210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-16 00:56:25.282245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-16 00:56:25.282259 | orchestrator | 2025-09-16 00:56:25.282277 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-09-16 00:56:25.282289 | orchestrator | Tuesday 16 September 2025 00:53:23 +0000 (0:00:03.001) 0:00:08.107 ***** 2025-09-16 00:56:25.282300 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:56:25.282311 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:56:25.282322 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:56:25.282333 | orchestrator | 2025-09-16 00:56:25.282343 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-09-16 00:56:25.282354 | orchestrator | Tuesday 16 September 2025 00:53:24 +0000 (0:00:00.727) 0:00:08.835 ***** 2025-09-16 00:56:25.282364 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:56:25.282375 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:56:25.282386 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:56:25.282403 | orchestrator | 2025-09-16 00:56:25.282414 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-09-16 00:56:25.282425 | orchestrator | Tuesday 16 September 2025 00:53:25 +0000 (0:00:01.470) 0:00:10.305 ***** 2025-09-16 00:56:25.282437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-16 00:56:25.282462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-16 00:56:25.282476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-16 00:56:25.282494 | orchestrator | 2025-09-16 00:56:25.282505 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-09-16 00:56:25.282515 | orchestrator | Tuesday 16 September 2025 00:53:29 +0000 (0:00:03.585) 0:00:13.891 ***** 2025-09-16 00:56:25.282526 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:56:25.282537 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:56:25.282548 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:56:25.282558 | orchestrator | 2025-09-16 00:56:25.282569 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-09-16 00:56:25.282579 | orchestrator | Tuesday 16 September 2025 00:53:30 +0000 (0:00:01.143) 0:00:15.034 ***** 2025-09-16 00:56:25.282590 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:56:25.282600 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:56:25.282611 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:56:25.282621 | orchestrator | 2025-09-16 00:56:25.282632 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-16 00:56:25.282642 | orchestrator | Tuesday 16 September 2025 00:53:34 +0000 (0:00:04.065) 0:00:19.100 ***** 2025-09-16 00:56:25.282653 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:56:25.282664 | orchestrator | 2025-09-16 00:56:25.282675 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-16 00:56:25.282685 | orchestrator | Tuesday 16 September 2025 00:53:35 +0000 (0:00:00.485) 0:00:19.585 ***** 2025-09-16 00:56:25.282710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 
'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-16 00:56:25.282730 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:56:25.282743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-16 00:56:25.282754 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:56:25.282778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-16 00:56:25.282798 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:56:25.282869 | orchestrator | 2025-09-16 00:56:25.282881 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-16 00:56:25.282892 | orchestrator | Tuesday 16 September 2025 00:53:38 +0000 (0:00:03.509) 0:00:23.094 ***** 2025-09-16 00:56:25.282904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-16 00:56:25.282916 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:56:25.282940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-16 00:56:25.282959 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:56:25.282974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-16 00:56:25.282992 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:56:25.283009 | orchestrator | 2025-09-16 00:56:25.283026 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-16 00:56:25.283042 | orchestrator | Tuesday 16 September 2025 00:53:41 +0000 (0:00:02.848) 0:00:25.943 ***** 2025-09-16 00:56:25.283077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-16 00:56:25.283106 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:56:25.283137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-16 00:56:25.283150 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:56:25.283166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-16 00:56:25.283183 | 
orchestrator | skipping: [testbed-node-1] 2025-09-16 00:56:25.283192 | orchestrator | 2025-09-16 00:56:25.283202 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-09-16 00:56:25.283211 | orchestrator | Tuesday 16 September 2025 00:53:44 +0000 (0:00:02.519) 0:00:28.463 ***** 2025-09-16 00:56:25.283228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-16 00:56:25.283241 | orchestrator | 2025-09-16 00:56:25 | INFO  | Task 6d2610e5-0cda-4192-8f43-3a389dfb1176 is in state SUCCESS 2025-09-16 00:56:25.283257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-16 00:56:25.283283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-16 00:56:25.283294 | orchestrator | 2025-09-16 00:56:25.283304 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-09-16 00:56:25.283314 | orchestrator | Tuesday 16 September 2025 00:53:47 +0000 (0:00:03.095) 0:00:31.559 ***** 2025-09-16 00:56:25.283323 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:56:25.283333 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:56:25.283342 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:56:25.283352 | orchestrator | 2025-09-16 00:56:25.283361 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-09-16 00:56:25.283371 | orchestrator | Tuesday 16 September 2025 00:53:47 +0000 (0:00:00.728) 0:00:32.288 ***** 2025-09-16 00:56:25.283380 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:56:25.283390 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:56:25.283399 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:56:25.283409 | orchestrator | 2025-09-16 00:56:25.283418 | orchestrator | TASK [mariadb : Establish 
whether the cluster has already existed] ************* 2025-09-16 00:56:25.283427 | orchestrator | Tuesday 16 September 2025 00:53:48 +0000 (0:00:00.401) 0:00:32.689 ***** 2025-09-16 00:56:25.283437 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:56:25.283448 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:56:25.283465 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:56:25.283492 | orchestrator | 2025-09-16 00:56:25.283508 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-09-16 00:56:25.283523 | orchestrator | Tuesday 16 September 2025 00:53:48 +0000 (0:00:00.287) 0:00:32.977 ***** 2025-09-16 00:56:25.283539 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-09-16 00:56:25.283566 | orchestrator | ...ignoring 2025-09-16 00:56:25.283585 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-09-16 00:56:25.283601 | orchestrator | ...ignoring 2025-09-16 00:56:25.283611 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-09-16 00:56:25.283620 | orchestrator | ...ignoring 2025-09-16 00:56:25.283630 | orchestrator | 2025-09-16 00:56:25.283640 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-09-16 00:56:25.283650 | orchestrator | Tuesday 16 September 2025 00:53:59 +0000 (0:00:10.829) 0:00:43.807 ***** 2025-09-16 00:56:25.283659 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:56:25.283669 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:56:25.283679 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:56:25.283688 | orchestrator | 2025-09-16 00:56:25.283698 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-09-16 00:56:25.283713 | orchestrator | Tuesday 16 September 2025 00:53:59 +0000 (0:00:00.426) 0:00:44.233 ***** 2025-09-16 00:56:25.283723 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:56:25.283733 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:56:25.283742 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:56:25.283752 | orchestrator | 2025-09-16 00:56:25.283761 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-09-16 00:56:25.283771 | orchestrator | Tuesday 16 September 2025 00:54:00 +0000 (0:00:00.674) 0:00:44.907 ***** 2025-09-16 00:56:25.283780 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:56:25.283790 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:56:25.283824 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:56:25.283842 | orchestrator | 2025-09-16 00:56:25.283860 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-09-16 00:56:25.283876 | orchestrator | Tuesday 16 September 2025 00:54:00 +0000 (0:00:00.436) 0:00:45.343 ***** 2025-09-16 00:56:25.283888 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:56:25.283897 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:56:25.283907 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:56:25.283916 | orchestrator | 2025-09-16 00:56:25.283926 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 
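The "Timeout when waiting for search string MariaDB" failures above are expected on a first deployment and are explicitly ignored: nothing is listening on the database port yet, so the liveness probe cannot find the MariaDB protocol banner. A minimal sketch of such a probe and of the follow-up WSREP sync check, written as Ansible tasks (variable names, timeout values and the docker/mysql invocation are illustrative assumptions, not the exact kolla-ansible implementation):

    - name: Check MariaDB service port liveness (sketch)
      # Produces "Timeout when waiting for search string MariaDB in <host>:<port>"
      # when no server answers with the MariaDB banner within the timeout.
      ansible.builtin.wait_for:
        host: "{{ ansible_host }}"          # illustrative; the real role uses its own address variable
        port: 3306
        connect_timeout: 1
        timeout: 10
        search_regex: MariaDB
      register: mariadb_port_liveness
      ignore_errors: true                   # a fresh node is expected to fail this check

    - name: Check MariaDB service WSREP sync status (sketch)
      # Only meaningful on nodes that answered the port check; Galera reports
      # wsrep_local_state_comment = "Synced" once a node has caught up with the cluster.
      ansible.builtin.command: >
        docker exec mariadb mysql -u monitor -p{{ mariadb_monitor_password }}
        -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"
      register: mariadb_wsrep_status
      changed_when: false
      failed_when: false

The outcome of these probes feeds the "Divide hosts by ..." tasks, which sort the nodes into groups that decide whether to bootstrap a brand-new cluster, join fresh nodes to a running one, or abort on an existing but stopped cluster.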
2025-09-16 00:56:25.283935 | orchestrator | Tuesday 16 September 2025 00:54:01 +0000 (0:00:00.409) 0:00:45.753 ***** 2025-09-16 00:56:25.283945 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:56:25.283955 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:56:25.283965 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:56:25.283974 | orchestrator | 2025-09-16 00:56:25.283992 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-09-16 00:56:25.284002 | orchestrator | Tuesday 16 September 2025 00:54:01 +0000 (0:00:00.487) 0:00:46.240 ***** 2025-09-16 00:56:25.284012 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:56:25.284021 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:56:25.284031 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:56:25.284040 | orchestrator | 2025-09-16 00:56:25.284050 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-16 00:56:25.284060 | orchestrator | Tuesday 16 September 2025 00:54:02 +0000 (0:00:00.588) 0:00:46.828 ***** 2025-09-16 00:56:25.284070 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:56:25.284079 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:56:25.284089 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-09-16 00:56:25.284099 | orchestrator | 2025-09-16 00:56:25.284108 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-09-16 00:56:25.284124 | orchestrator | Tuesday 16 September 2025 00:54:02 +0000 (0:00:00.369) 0:00:47.198 ***** 2025-09-16 00:56:25.284141 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:56:25.284157 | orchestrator | 2025-09-16 00:56:25.284172 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-09-16 00:56:25.284198 | orchestrator | Tuesday 16 September 2025 00:54:12 +0000 (0:00:09.721) 0:00:56.919 ***** 2025-09-16 00:56:25.284216 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:56:25.284232 | orchestrator | 2025-09-16 00:56:25.284248 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-16 00:56:25.284260 | orchestrator | Tuesday 16 September 2025 00:54:12 +0000 (0:00:00.121) 0:00:57.041 ***** 2025-09-16 00:56:25.284269 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:56:25.284279 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:56:25.284289 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:56:25.284298 | orchestrator | 2025-09-16 00:56:25.284308 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-09-16 00:56:25.284317 | orchestrator | Tuesday 16 September 2025 00:54:13 +0000 (0:00:00.899) 0:00:57.941 ***** 2025-09-16 00:56:25.284327 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:56:25.284336 | orchestrator | 2025-09-16 00:56:25.284345 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-09-16 00:56:25.284355 | orchestrator | Tuesday 16 September 2025 00:54:21 +0000 (0:00:07.637) 0:01:05.578 ***** 2025-09-16 00:56:25.284364 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:56:25.284373 | orchestrator | 2025-09-16 00:56:25.284383 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-09-16 00:56:25.284392 | orchestrator | Tuesday 16 September 2025 00:54:22 +0000 (0:00:01.574) 
0:01:07.153 ***** 2025-09-16 00:56:25.284402 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:56:25.284411 | orchestrator | 2025-09-16 00:56:25.284421 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-09-16 00:56:25.284430 | orchestrator | Tuesday 16 September 2025 00:54:25 +0000 (0:00:02.462) 0:01:09.616 ***** 2025-09-16 00:56:25.284440 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:56:25.284449 | orchestrator | 2025-09-16 00:56:25.284458 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-09-16 00:56:25.284468 | orchestrator | Tuesday 16 September 2025 00:54:25 +0000 (0:00:00.135) 0:01:09.751 ***** 2025-09-16 00:56:25.284477 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:56:25.284487 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:56:25.284496 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:56:25.284505 | orchestrator | 2025-09-16 00:56:25.284515 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-09-16 00:56:25.284524 | orchestrator | Tuesday 16 September 2025 00:54:25 +0000 (0:00:00.322) 0:01:10.073 ***** 2025-09-16 00:56:25.284534 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:56:25.284543 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-16 00:56:25.284553 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:56:25.284562 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:56:25.284572 | orchestrator | 2025-09-16 00:56:25.284581 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-16 00:56:25.284591 | orchestrator | skipping: no hosts matched 2025-09-16 00:56:25.284601 | orchestrator | 2025-09-16 00:56:25.284618 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-16 00:56:25.284634 | orchestrator | 2025-09-16 00:56:25.284650 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-16 00:56:25.284666 | orchestrator | Tuesday 16 September 2025 00:54:26 +0000 (0:00:00.446) 0:01:10.520 ***** 2025-09-16 00:56:25.284689 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:56:25.284705 | orchestrator | 2025-09-16 00:56:25.284723 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-16 00:56:25.284736 | orchestrator | Tuesday 16 September 2025 00:54:43 +0000 (0:00:17.553) 0:01:28.073 ***** 2025-09-16 00:56:25.284751 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:56:25.284766 | orchestrator | 2025-09-16 00:56:25.284782 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-16 00:56:25.284876 | orchestrator | Tuesday 16 September 2025 00:55:04 +0000 (0:00:20.622) 0:01:48.695 ***** 2025-09-16 00:56:25.284895 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:56:25.284910 | orchestrator | 2025-09-16 00:56:25.284925 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-16 00:56:25.284939 | orchestrator | 2025-09-16 00:56:25.284951 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-16 00:56:25.284965 | orchestrator | Tuesday 16 September 2025 00:55:06 +0000 (0:00:02.387) 0:01:51.082 ***** 2025-09-16 00:56:25.284977 | orchestrator | changed: [testbed-node-2] 
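The sequence above follows the usual Galera bring-up pattern: one node is bootstrapped so it forms a brand-new cluster, the remaining members are then started one host at a time, and each must pass the port liveness and WSREP "Synced" checks before the next one is touched; the bootstrap node itself is restarted with the regular configuration at the end. A compressed sketch of that ordering as Ansible tasks (container names, the docker commands and the --wsrep-new-cluster detail are illustrative assumptions, not the exact kolla-ansible handlers):

    # 1. Bootstrap: only the first host forms a new cluster (conceptually,
    #    mysqld started with --wsrep-new-cluster / an empty gcomm:// address).
    - name: Running MariaDB bootstrap container (sketch)
      ansible.builtin.command: docker start mariadb_bootstrap
      when: inventory_hostname == groups['mariadb'][0]

    # 2. Rolling (re)start of the remaining members, strictly one at a time
    #    so the cluster never loses quorum while a node resyncs.
    - name: Restart MariaDB container (sketch)
      ansible.builtin.command: docker restart mariadb
      throttle: 1

    - name: Wait for MariaDB service to sync WSREP (sketch)
      ansible.builtin.command: >
        docker exec mariadb mysql -u monitor -p{{ mariadb_monitor_password }}
        -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"
      register: wsrep_state
      until: "'Synced' in wsrep_state.stdout"
      retries: 30
      delay: 10
      changed_when: false

In this run the ordering is expressed as separate serialized plays rather than one task list: testbed-node-1 and testbed-node-2 each get their own "Start mariadb services" play above, and the bootstrap host testbed-node-0 is restarted last in the "Restart bootstrap mariadb service" play that follows.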
2025-09-16 00:56:25.284991 | orchestrator | 2025-09-16 00:56:25.285005 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-16 00:56:25.285017 | orchestrator | Tuesday 16 September 2025 00:55:26 +0000 (0:00:19.823) 0:02:10.906 ***** 2025-09-16 00:56:25.285031 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:56:25.285044 | orchestrator | 2025-09-16 00:56:25.285057 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-16 00:56:25.285071 | orchestrator | Tuesday 16 September 2025 00:55:48 +0000 (0:00:21.566) 0:02:32.473 ***** 2025-09-16 00:56:25.285089 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:56:25.285097 | orchestrator | 2025-09-16 00:56:25.285105 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-16 00:56:25.285113 | orchestrator | 2025-09-16 00:56:25.285120 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-16 00:56:25.285128 | orchestrator | Tuesday 16 September 2025 00:55:50 +0000 (0:00:02.437) 0:02:34.910 ***** 2025-09-16 00:56:25.285136 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:56:25.285144 | orchestrator | 2025-09-16 00:56:25.285151 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-16 00:56:25.285159 | orchestrator | Tuesday 16 September 2025 00:56:02 +0000 (0:00:11.914) 0:02:46.824 ***** 2025-09-16 00:56:25.285167 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:56:25.285174 | orchestrator | 2025-09-16 00:56:25.285182 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-16 00:56:25.285190 | orchestrator | Tuesday 16 September 2025 00:56:06 +0000 (0:00:04.536) 0:02:51.361 ***** 2025-09-16 00:56:25.285197 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:56:25.285205 | orchestrator | 2025-09-16 00:56:25.285213 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-16 00:56:25.285220 | orchestrator | 2025-09-16 00:56:25.285228 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-16 00:56:25.285236 | orchestrator | Tuesday 16 September 2025 00:56:09 +0000 (0:00:02.719) 0:02:54.080 ***** 2025-09-16 00:56:25.285243 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:56:25.285251 | orchestrator | 2025-09-16 00:56:25.285259 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-09-16 00:56:25.285266 | orchestrator | Tuesday 16 September 2025 00:56:10 +0000 (0:00:00.527) 0:02:54.608 ***** 2025-09-16 00:56:25.285274 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:56:25.285282 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:56:25.285290 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:56:25.285297 | orchestrator | 2025-09-16 00:56:25.285305 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-09-16 00:56:25.285313 | orchestrator | Tuesday 16 September 2025 00:56:12 +0000 (0:00:02.334) 0:02:56.942 ***** 2025-09-16 00:56:25.285320 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:56:25.285328 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:56:25.285336 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:56:25.285345 | orchestrator | 2025-09-16 
00:56:25.285359 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-09-16 00:56:25.285372 | orchestrator | Tuesday 16 September 2025 00:56:14 +0000 (0:00:02.274) 0:02:59.216 ***** 2025-09-16 00:56:25.285384 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:56:25.285397 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:56:25.285419 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:56:25.285433 | orchestrator | 2025-09-16 00:56:25.285447 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-09-16 00:56:25.285461 | orchestrator | Tuesday 16 September 2025 00:56:17 +0000 (0:00:02.278) 0:03:01.494 ***** 2025-09-16 00:56:25.285471 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:56:25.285480 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:56:25.285488 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:56:25.285495 | orchestrator | 2025-09-16 00:56:25.285503 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-09-16 00:56:25.285511 | orchestrator | Tuesday 16 September 2025 00:56:19 +0000 (0:00:02.258) 0:03:03.752 ***** 2025-09-16 00:56:25.285519 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:56:25.285527 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:56:25.285535 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:56:25.285543 | orchestrator | 2025-09-16 00:56:25.285550 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-16 00:56:25.285558 | orchestrator | Tuesday 16 September 2025 00:56:22 +0000 (0:00:02.796) 0:03:06.549 ***** 2025-09-16 00:56:25.285566 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:56:25.285574 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:56:25.285582 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:56:25.285590 | orchestrator | 2025-09-16 00:56:25.285598 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:56:25.285606 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-16 00:56:25.285621 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-09-16 00:56:25.285630 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-16 00:56:25.285638 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-16 00:56:25.285646 | orchestrator | 2025-09-16 00:56:25.285656 | orchestrator | 2025-09-16 00:56:25.285670 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:56:25.285685 | orchestrator | Tuesday 16 September 2025 00:56:22 +0000 (0:00:00.398) 0:03:06.948 ***** 2025-09-16 00:56:25.285704 | orchestrator | =============================================================================== 2025-09-16 00:56:25.285717 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 42.19s 2025-09-16 00:56:25.285729 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 37.38s 2025-09-16 00:56:25.285748 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.91s 2025-09-16 00:56:25.285765 | orchestrator | mariadb : Check MariaDB service port liveness 
-------------------------- 10.83s 2025-09-16 00:56:25.285778 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.72s 2025-09-16 00:56:25.285820 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.64s 2025-09-16 00:56:25.285830 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.82s 2025-09-16 00:56:25.285838 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.54s 2025-09-16 00:56:25.285846 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.07s 2025-09-16 00:56:25.285854 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.59s 2025-09-16 00:56:25.285862 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.51s 2025-09-16 00:56:25.285870 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.10s 2025-09-16 00:56:25.285878 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.00s 2025-09-16 00:56:25.285893 | orchestrator | Check MariaDB service --------------------------------------------------- 2.88s 2025-09-16 00:56:25.285901 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.85s 2025-09-16 00:56:25.285909 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.80s 2025-09-16 00:56:25.285917 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.72s 2025-09-16 00:56:25.285925 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.52s 2025-09-16 00:56:25.285933 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.46s 2025-09-16 00:56:25.285941 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.33s 2025-09-16 00:56:25.285949 | orchestrator | 2025-09-16 00:56:25 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:56:25.285957 | orchestrator | 2025-09-16 00:56:25 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:56:28.334291 | orchestrator | 2025-09-16 00:56:28 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED 2025-09-16 00:56:28.335072 | orchestrator | 2025-09-16 00:56:28 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:56:28.336234 | orchestrator | 2025-09-16 00:56:28 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:56:28.336265 | orchestrator | 2025-09-16 00:56:28 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:56:31.370865 | orchestrator | 2025-09-16 00:56:31 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED 2025-09-16 00:56:31.371575 | orchestrator | 2025-09-16 00:56:31 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:56:31.371940 | orchestrator | 2025-09-16 00:56:31 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:56:31.371979 | orchestrator | 2025-09-16 00:56:31 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:56:34.418134 | orchestrator | 2025-09-16 00:56:34 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED 2025-09-16 00:56:34.421444 | orchestrator | 2025-09-16 00:56:34 | INFO  | Task 
b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:56:34.422687 | orchestrator | 2025-09-16 00:56:34 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:56:34.422714 | orchestrator | 2025-09-16 00:56:34 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:56:37.460269 | orchestrator | 2025-09-16 00:56:37 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED 2025-09-16 00:56:37.464682 | orchestrator | 2025-09-16 00:56:37 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:56:37.466224 | orchestrator | 2025-09-16 00:56:37 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:56:37.468378 | orchestrator | 2025-09-16 00:56:37 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:56:40.514771 | orchestrator | 2025-09-16 00:56:40 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED 2025-09-16 00:56:40.515157 | orchestrator | 2025-09-16 00:56:40 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:56:40.516355 | orchestrator | 2025-09-16 00:56:40 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:56:40.516403 | orchestrator | 2025-09-16 00:56:40 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:56:43.555227 | orchestrator | 2025-09-16 00:56:43 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED 2025-09-16 00:56:43.555523 | orchestrator | 2025-09-16 00:56:43 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:56:43.557246 | orchestrator | 2025-09-16 00:56:43 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:56:43.557340 | orchestrator | 2025-09-16 00:56:43 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:56:46.588721 | orchestrator | 2025-09-16 00:56:46 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED 2025-09-16 00:56:46.591162 | orchestrator | 2025-09-16 00:56:46 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:56:46.593895 | orchestrator | 2025-09-16 00:56:46 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:56:46.594124 | orchestrator | 2025-09-16 00:56:46 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:56:49.628936 | orchestrator | 2025-09-16 00:56:49 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED 2025-09-16 00:56:49.630177 | orchestrator | 2025-09-16 00:56:49 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:56:49.631490 | orchestrator | 2025-09-16 00:56:49 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:56:49.631831 | orchestrator | 2025-09-16 00:56:49 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:56:52.672661 | orchestrator | 2025-09-16 00:56:52 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED 2025-09-16 00:56:52.674572 | orchestrator | 2025-09-16 00:56:52 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:56:52.676136 | orchestrator | 2025-09-16 00:56:52 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:56:52.676342 | orchestrator | 2025-09-16 00:56:52 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:56:55.719259 | orchestrator | 2025-09-16 00:56:55 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state 
STARTED 2025-09-16 00:56:55.720704 | orchestrator | 2025-09-16 00:56:55 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:56:55.722348 | orchestrator | 2025-09-16 00:56:55 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:56:55.722452 | orchestrator | 2025-09-16 00:56:55 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:56:58.759709 | orchestrator | 2025-09-16 00:56:58 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED 2025-09-16 00:56:58.761426 | orchestrator | 2025-09-16 00:56:58 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:56:58.761459 | orchestrator | 2025-09-16 00:56:58 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:56:58.761471 | orchestrator | 2025-09-16 00:56:58 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:57:01.801163 | orchestrator | 2025-09-16 00:57:01 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED 2025-09-16 00:57:01.801755 | orchestrator | 2025-09-16 00:57:01 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:57:01.803401 | orchestrator | 2025-09-16 00:57:01 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:57:01.803934 | orchestrator | 2025-09-16 00:57:01 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:57:04.854304 | orchestrator | 2025-09-16 00:57:04 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED 2025-09-16 00:57:04.857429 | orchestrator | 2025-09-16 00:57:04 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:57:04.860091 | orchestrator | 2025-09-16 00:57:04 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:57:04.860119 | orchestrator | 2025-09-16 00:57:04 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:57:07.898316 | orchestrator | 2025-09-16 00:57:07 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED 2025-09-16 00:57:07.900300 | orchestrator | 2025-09-16 00:57:07 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:57:07.902438 | orchestrator | 2025-09-16 00:57:07 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:57:07.902670 | orchestrator | 2025-09-16 00:57:07 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:57:10.944420 | orchestrator | 2025-09-16 00:57:10 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED 2025-09-16 00:57:10.945938 | orchestrator | 2025-09-16 00:57:10 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:57:10.947503 | orchestrator | 2025-09-16 00:57:10 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:57:10.947830 | orchestrator | 2025-09-16 00:57:10 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:57:13.986808 | orchestrator | 2025-09-16 00:57:13 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED 2025-09-16 00:57:13.986968 | orchestrator | 2025-09-16 00:57:13 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:57:13.989243 | orchestrator | 2025-09-16 00:57:13 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:57:13.989267 | orchestrator | 2025-09-16 00:57:13 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:57:17.036347 | orchestrator 
| 2025-09-16 00:57:17 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state STARTED 2025-09-16 00:57:17.038102 | orchestrator | 2025-09-16 00:57:17 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:57:17.040150 | orchestrator | 2025-09-16 00:57:17 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:57:17.041230 | orchestrator | 2025-09-16 00:57:17 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:57:20.110371 | orchestrator | 2025-09-16 00:57:20 | INFO  | Task d8bb08e4-657b-4ea9-8b7b-14af06c9ee31 is in state SUCCESS 2025-09-16 00:57:20.113213 | orchestrator | 2025-09-16 00:57:20.113241 | orchestrator | 2025-09-16 00:57:20.113249 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-09-16 00:57:20.113256 | orchestrator | 2025-09-16 00:57:20.113263 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-16 00:57:20.113321 | orchestrator | Tuesday 16 September 2025 00:55:12 +0000 (0:00:00.617) 0:00:00.617 ***** 2025-09-16 00:57:20.113330 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:57:20.113338 | orchestrator | 2025-09-16 00:57:20.113344 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-16 00:57:20.113350 | orchestrator | Tuesday 16 September 2025 00:55:12 +0000 (0:00:00.635) 0:00:01.253 ***** 2025-09-16 00:57:20.113357 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:57:20.113363 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:57:20.113370 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:57:20.113376 | orchestrator | 2025-09-16 00:57:20.113382 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-16 00:57:20.113388 | orchestrator | Tuesday 16 September 2025 00:55:13 +0000 (0:00:00.657) 0:00:01.911 ***** 2025-09-16 00:57:20.113395 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:57:20.113421 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:57:20.113427 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:57:20.113434 | orchestrator | 2025-09-16 00:57:20.113440 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-16 00:57:20.113446 | orchestrator | Tuesday 16 September 2025 00:55:13 +0000 (0:00:00.269) 0:00:02.181 ***** 2025-09-16 00:57:20.113452 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:57:20.113458 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:57:20.113538 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:57:20.113545 | orchestrator | 2025-09-16 00:57:20.113579 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-16 00:57:20.113587 | orchestrator | Tuesday 16 September 2025 00:55:14 +0000 (0:00:00.748) 0:00:02.929 ***** 2025-09-16 00:57:20.113593 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:57:20.113636 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:57:20.113645 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:57:20.113651 | orchestrator | 2025-09-16 00:57:20.113657 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-16 00:57:20.113663 | orchestrator | Tuesday 16 September 2025 00:55:14 +0000 (0:00:00.294) 0:00:03.224 ***** 2025-09-16 00:57:20.113670 | orchestrator | ok: [testbed-node-3] 2025-09-16 
00:57:20.113676 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:57:20.113682 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:57:20.113688 | orchestrator | 2025-09-16 00:57:20.113694 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-16 00:57:20.114060 | orchestrator | Tuesday 16 September 2025 00:55:15 +0000 (0:00:00.308) 0:00:03.532 ***** 2025-09-16 00:57:20.114074 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:57:20.114080 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:57:20.114086 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:57:20.114092 | orchestrator | 2025-09-16 00:57:20.114120 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-16 00:57:20.114127 | orchestrator | Tuesday 16 September 2025 00:55:15 +0000 (0:00:00.294) 0:00:03.826 ***** 2025-09-16 00:57:20.114134 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.114141 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:57:20.114147 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:57:20.114153 | orchestrator | 2025-09-16 00:57:20.114160 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-16 00:57:20.114166 | orchestrator | Tuesday 16 September 2025 00:55:15 +0000 (0:00:00.356) 0:00:04.183 ***** 2025-09-16 00:57:20.114172 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:57:20.114178 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:57:20.114184 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:57:20.114191 | orchestrator | 2025-09-16 00:57:20.114197 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-16 00:57:20.114203 | orchestrator | Tuesday 16 September 2025 00:55:16 +0000 (0:00:00.253) 0:00:04.437 ***** 2025-09-16 00:57:20.114209 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-16 00:57:20.114216 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-16 00:57:20.114222 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-16 00:57:20.114228 | orchestrator | 2025-09-16 00:57:20.114234 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-16 00:57:20.114240 | orchestrator | Tuesday 16 September 2025 00:55:16 +0000 (0:00:00.549) 0:00:04.987 ***** 2025-09-16 00:57:20.114246 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:57:20.114253 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:57:20.114260 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:57:20.114266 | orchestrator | 2025-09-16 00:57:20.114272 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-16 00:57:20.114278 | orchestrator | Tuesday 16 September 2025 00:55:17 +0000 (0:00:00.354) 0:00:05.341 ***** 2025-09-16 00:57:20.114299 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-16 00:57:20.114314 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-16 00:57:20.114320 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-16 00:57:20.114326 | orchestrator | 2025-09-16 00:57:20.114332 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-16 
00:57:20.114339 | orchestrator | Tuesday 16 September 2025 00:55:19 +0000 (0:00:02.030) 0:00:07.371 ***** 2025-09-16 00:57:20.114345 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-16 00:57:20.114352 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-16 00:57:20.114358 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-16 00:57:20.114364 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.114370 | orchestrator | 2025-09-16 00:57:20.114376 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-16 00:57:20.114408 | orchestrator | Tuesday 16 September 2025 00:55:19 +0000 (0:00:00.350) 0:00:07.721 ***** 2025-09-16 00:57:20.114417 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.114427 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.114434 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.114440 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.114446 | orchestrator | 2025-09-16 00:57:20.114453 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-16 00:57:20.114459 | orchestrator | Tuesday 16 September 2025 00:55:20 +0000 (0:00:00.661) 0:00:08.383 ***** 2025-09-16 00:57:20.114467 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.114475 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.114486 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.114493 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.114499 | orchestrator | 2025-09-16 00:57:20.114505 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 
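Editor's note: the "Find a running mon container" loop above shells out to `docker ps -q --filter name=ceph-mon-<hostname>` on each monitor (the exact command is visible in the results that follow), and the "Set_fact running_mon - container" step below keeps the monitor whose probe returned a container ID. A rough local Python sketch of that selection, assuming the first non-empty result wins; in the real play the command is delegated over SSH to each monitor rather than run locally, and `find_running_mon` is a hypothetical helper name:

```python
import subprocess

def find_running_mon(mon_hosts):
    """Return (hostname, container_id) for the first monitor whose
    ceph-mon-<hostname> container is reported by `docker ps`."""
    for host in mon_hosts:
        result = subprocess.run(
            ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{host}"],
            capture_output=True, text=True, check=False,
        )
        container_id = result.stdout.strip()
        if container_id:
            return host, container_id
    return None, None

if __name__ == "__main__":
    # Hostnames taken from the log above; adjust for your own inventory.
    host, cid = find_running_mon(["testbed-node-0", "testbed-node-1", "testbed-node-2"])
    print(f"running mon: {host} (container {cid})")
```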
2025-09-16 00:57:20.114512 | orchestrator | Tuesday 16 September 2025 00:55:20 +0000 (0:00:00.147) 0:00:08.531 ***** 2025-09-16 00:57:20.114520 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '0b6eb837a9c7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-16 00:55:17.677553', 'end': '2025-09-16 00:55:17.724099', 'delta': '0:00:00.046546', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['0b6eb837a9c7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-09-16 00:57:20.114534 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'df74f9d355c2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-16 00:55:18.355344', 'end': '2025-09-16 00:55:18.398151', 'delta': '0:00:00.042807', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['df74f9d355c2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-09-16 00:57:20.114557 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '33d09c7f785d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-16 00:55:18.922668', 'end': '2025-09-16 00:55:18.964954', 'delta': '0:00:00.042286', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['33d09c7f785d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-09-16 00:57:20.114564 | orchestrator | 2025-09-16 00:57:20.114571 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-16 00:57:20.114577 | orchestrator | Tuesday 16 September 2025 00:55:20 +0000 (0:00:00.358) 0:00:08.889 ***** 2025-09-16 00:57:20.114584 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:57:20.114590 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:57:20.114596 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:57:20.114602 | orchestrator | 2025-09-16 00:57:20.114609 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-16 00:57:20.114615 | orchestrator | Tuesday 16 September 2025 00:55:21 +0000 (0:00:00.410) 0:00:09.300 ***** 2025-09-16 00:57:20.114621 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-09-16 00:57:20.114628 | orchestrator | 2025-09-16 00:57:20.114634 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-16 00:57:20.114641 | 
orchestrator | Tuesday 16 September 2025 00:55:22 +0000 (0:00:01.733) 0:00:11.033 ***** 2025-09-16 00:57:20.114649 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.114656 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:57:20.114663 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:57:20.114670 | orchestrator | 2025-09-16 00:57:20.114678 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-16 00:57:20.114684 | orchestrator | Tuesday 16 September 2025 00:55:23 +0000 (0:00:00.255) 0:00:11.288 ***** 2025-09-16 00:57:20.114692 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.114699 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:57:20.114706 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:57:20.114713 | orchestrator | 2025-09-16 00:57:20.114721 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-16 00:57:20.114728 | orchestrator | Tuesday 16 September 2025 00:55:23 +0000 (0:00:00.371) 0:00:11.660 ***** 2025-09-16 00:57:20.114739 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.114747 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:57:20.114754 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:57:20.114761 | orchestrator | 2025-09-16 00:57:20.114771 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-16 00:57:20.114778 | orchestrator | Tuesday 16 September 2025 00:55:23 +0000 (0:00:00.429) 0:00:12.090 ***** 2025-09-16 00:57:20.114785 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:57:20.114792 | orchestrator | 2025-09-16 00:57:20.114800 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-16 00:57:20.114807 | orchestrator | Tuesday 16 September 2025 00:55:23 +0000 (0:00:00.124) 0:00:12.214 ***** 2025-09-16 00:57:20.114832 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.114839 | orchestrator | 2025-09-16 00:57:20.114846 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-16 00:57:20.114854 | orchestrator | Tuesday 16 September 2025 00:55:24 +0000 (0:00:00.209) 0:00:12.423 ***** 2025-09-16 00:57:20.114861 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.114868 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:57:20.114875 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:57:20.114881 | orchestrator | 2025-09-16 00:57:20.114888 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-16 00:57:20.114895 | orchestrator | Tuesday 16 September 2025 00:55:24 +0000 (0:00:00.265) 0:00:12.688 ***** 2025-09-16 00:57:20.114902 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.114909 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:57:20.114917 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:57:20.114923 | orchestrator | 2025-09-16 00:57:20.114930 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-16 00:57:20.114937 | orchestrator | Tuesday 16 September 2025 00:55:24 +0000 (0:00:00.298) 0:00:12.987 ***** 2025-09-16 00:57:20.114944 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.114951 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:57:20.114958 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:57:20.114965 | orchestrator | 
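Editor's note: the skipped "Resolve device link(s)" / "Set_fact build devices from resolved symlinks" steps above, and the dedicated_device / bluestore_wal_device variants that follow, only matter when OSD devices are configured as /dev/disk/by-* symlinks; in this run the configured devices are already plain block nodes, so the tasks skip. A minimal sketch of the underlying idea (symlinks resolved to their canonical kernel device), with purely illustrative input paths:

```python
import os

def resolve_devices(devices):
    """Map device paths to their canonical targets, mirroring the idea behind
    the 'Resolve device link(s)' tasks: /dev/disk/by-* symlinks become the
    underlying /dev/sdX or /dev/dm-X node, plain paths pass through unchanged."""
    return [os.path.realpath(dev) for dev in devices]

if __name__ == "__main__":
    # Illustrative input only; real values come from the configured 'devices' list.
    example = [
        "/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_216f9756-46fe-48b3-8a57-6cc5b7e0c275",
        "/dev/sdb",
    ]
    print(resolve_devices(example))
```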
2025-09-16 00:57:20.114972 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-16 00:57:20.114979 | orchestrator | Tuesday 16 September 2025 00:55:25 +0000 (0:00:00.468) 0:00:13.455 ***** 2025-09-16 00:57:20.114986 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.114994 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:57:20.115001 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:57:20.115007 | orchestrator | 2025-09-16 00:57:20.115014 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-16 00:57:20.115020 | orchestrator | Tuesday 16 September 2025 00:55:25 +0000 (0:00:00.337) 0:00:13.793 ***** 2025-09-16 00:57:20.115026 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.115032 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:57:20.115038 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:57:20.115045 | orchestrator | 2025-09-16 00:57:20.115051 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-16 00:57:20.115057 | orchestrator | Tuesday 16 September 2025 00:55:25 +0000 (0:00:00.292) 0:00:14.085 ***** 2025-09-16 00:57:20.115063 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.115070 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:57:20.115076 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:57:20.115082 | orchestrator | 2025-09-16 00:57:20.115089 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-16 00:57:20.115111 | orchestrator | Tuesday 16 September 2025 00:55:26 +0000 (0:00:00.302) 0:00:14.388 ***** 2025-09-16 00:57:20.115118 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.115124 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:57:20.115130 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:57:20.115136 | orchestrator | 2025-09-16 00:57:20.115143 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-16 00:57:20.115154 | orchestrator | Tuesday 16 September 2025 00:55:26 +0000 (0:00:00.496) 0:00:14.884 ***** 2025-09-16 00:57:20.115161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8832b43a--4370--5f7f--b8ca--e1ef860202d6-osd--block--8832b43a--4370--5f7f--b8ca--e1ef860202d6', 'dm-uuid-LVM-YcgXbQSLW6T92S6r08xR6FKW11TasuSzM1boHuyKTVrqUfc58vek5nzVrYSc131l'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b409e677--b998--57d2--be40--43b65c9fb72d-osd--block--b409e677--b998--57d2--be40--43b65c9fb72d', 'dm-uuid-LVM-gbuhjycd69TX34wcCVoOmjlpPQ8wKcDDF2HTNd5TzUHszt21u4Oo8BSW9v29cmec'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-16 
00:57:20.115178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115192 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115198 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115225 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a154e298--15cb--5d50--9a1c--17bc1371db7e-osd--block--a154e298--15cb--5d50--9a1c--17bc1371db7e', 'dm-uuid-LVM-EFbsuN8afaIRpM6v16JYlvMAjTlWagjCgfIoPoiTnRbMaFJFK1uNEn8SJIoQO836'], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115251 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115261 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--56010334--63d7--5603--a2fe--432c47d6dcb8-osd--block--56010334--63d7--5603--a2fe--432c47d6dcb8', 'dm-uuid-LVM-ucs6vcOg2JldR43Dv3HJMWOQXgxk4Rjo7I7oTzMQaB9pBSV82lBHT1wVuGjA34S1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf', 'scsi-SQEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part1', 'scsi-SQEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part14', 'scsi-SQEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part15', 'scsi-SQEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part16', 'scsi-SQEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:57:20.115299 | orchestrator 
| skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8832b43a--4370--5f7f--b8ca--e1ef860202d6-osd--block--8832b43a--4370--5f7f--b8ca--e1ef860202d6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WHoXZD-QS5F-ff1Z-a1ef-ziTD-c1GW-4CB7Fq', 'scsi-0QEMU_QEMU_HARDDISK_216f9756-46fe-48b3-8a57-6cc5b7e0c275', 'scsi-SQEMU_QEMU_HARDDISK_216f9756-46fe-48b3-8a57-6cc5b7e0c275'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:57:20.115313 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115323 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b409e677--b998--57d2--be40--43b65c9fb72d-osd--block--b409e677--b998--57d2--be40--43b65c9fb72d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3K9cLz-vSxn-jDgK-Vo6h-4rad-CN2R-XamGFe', 'scsi-0QEMU_QEMU_HARDDISK_ebe7fd99-ddf0-4119-8dea-cb8b427f2aed', 'scsi-SQEMU_QEMU_HARDDISK_ebe7fd99-ddf0-4119-8dea-cb8b427f2aed'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:57:20.115330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b7c66eb-e150-40bb-863f-cd4924cbb0ab', 'scsi-SQEMU_QEMU_HARDDISK_6b7c66eb-e150-40bb-863f-cd4924cbb0ab'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:57:20.115360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-16-00-02-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:57:20.115372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115385 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.115392 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115413 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--457b984f--2001--5589--9984--9a697803acd2-osd--block--457b984f--2001--5589--9984--9a697803acd2', 'dm-uuid-LVM-tzkPODnltvbLVlVrcBUaBpanSvFXy5Iay3kG9ArxWiFQxKflJopyLP8Gmm4Yvbsw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 
'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115426 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d2877fc6--62dc--51ad--b157--4c09a4f274b5-osd--block--d2877fc6--62dc--51ad--b157--4c09a4f274b5', 'dm-uuid-LVM-7LVTsKXd7HIvwLOlwIgvnvIMJ54t1cPOgYpDah7ONpCBRQytZ43PyRVPBl38dgW5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115439 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370', 'scsi-SQEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part1', 'scsi-SQEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part14', 'scsi-SQEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part15', 'scsi-SQEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part16', 'scsi-SQEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:57:20.115453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a154e298--15cb--5d50--9a1c--17bc1371db7e-osd--block--a154e298--15cb--5d50--9a1c--17bc1371db7e'], 'host': 'SCSI 
storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-P3bfw4-K4Nt-fYqq-420J-zfth-TTxR-E3QAou', 'scsi-0QEMU_QEMU_HARDDISK_5c63af0b-1be6-4a9c-8f35-a4445080f1db', 'scsi-SQEMU_QEMU_HARDDISK_5c63af0b-1be6-4a9c-8f35-a4445080f1db'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:57:20.115460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115467 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--56010334--63d7--5603--a2fe--432c47d6dcb8-osd--block--56010334--63d7--5603--a2fe--432c47d6dcb8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3tif6z-fkDa-jtZO-rKIs-mW9z-sNzV-G9Mbhe', 'scsi-0QEMU_QEMU_HARDDISK_da9e83cb-2e5e-4388-ad73-1879a24665a3', 'scsi-SQEMU_QEMU_HARDDISK_da9e83cb-2e5e-4388-ad73-1879a24665a3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:57:20.115477 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115490 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46481bd5-1fc4-4619-9f81-82a2d5c944be', 'scsi-SQEMU_QEMU_HARDDISK_46481bd5-1fc4-4619-9f81-82a2d5c944be'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:57:20.115497 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-16-00-02-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:57:20.115510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115516 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:57:20.115526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115532 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115539 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-16 00:57:20.115561 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595', 'scsi-SQEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part1', 'scsi-SQEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part14', 'scsi-SQEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part15', 'scsi-SQEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part16', 'scsi-SQEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:57:20.115571 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--457b984f--2001--5589--9984--9a697803acd2-osd--block--457b984f--2001--5589--9984--9a697803acd2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mgW1BQ-nXoc-c1V8-OokY-Wdcb-Y2DR-cYNYFs', 'scsi-0QEMU_QEMU_HARDDISK_a99d92e2-a7d0-4115-a3b5-db7bfa0170a9', 'scsi-SQEMU_QEMU_HARDDISK_a99d92e2-a7d0-4115-a3b5-db7bfa0170a9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:57:20.115578 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d2877fc6--62dc--51ad--b157--4c09a4f274b5-osd--block--d2877fc6--62dc--51ad--b157--4c09a4f274b5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TlF0kY-AYW1-VWgt-gIFx-fbP3-MThZ-R9X0sN', 'scsi-0QEMU_QEMU_HARDDISK_f8c86b93-6440-4cc6-ba3c-00ae05f2a443', 'scsi-SQEMU_QEMU_HARDDISK_f8c86b93-6440-4cc6-ba3c-00ae05f2a443'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:57:20.115589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad9de541-7002-4a51-9253-a212a9f46ca2', 'scsi-SQEMU_QEMU_HARDDISK_ad9de541-7002-4a51-9253-a212a9f46ca2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:57:20.115599 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-16-00-02-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-16 00:57:20.115605 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:57:20.115612 | orchestrator | 2025-09-16 00:57:20.115618 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-16 00:57:20.115624 | orchestrator | Tuesday 16 September 2025 00:55:27 +0000 (0:00:00.508) 0:00:15.393 ***** 2025-09-16 00:57:20.115631 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8832b43a--4370--5f7f--b8ca--e1ef860202d6-osd--block--8832b43a--4370--5f7f--b8ca--e1ef860202d6', 'dm-uuid-LVM-YcgXbQSLW6T92S6r08xR6FKW11TasuSzM1boHuyKTVrqUfc58vek5nzVrYSc131l'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115638 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b409e677--b998--57d2--be40--43b65c9fb72d-osd--block--b409e677--b998--57d2--be40--43b65c9fb72d', 'dm-uuid-LVM-gbuhjycd69TX34wcCVoOmjlpPQ8wKcDDF2HTNd5TzUHszt21u4Oo8BSW9v29cmec'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115648 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115655 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115666 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115676 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115683 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115689 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115699 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a154e298--15cb--5d50--9a1c--17bc1371db7e-osd--block--a154e298--15cb--5d50--9a1c--17bc1371db7e', 'dm-uuid-LVM-EFbsuN8afaIRpM6v16JYlvMAjTlWagjCgfIoPoiTnRbMaFJFK1uNEn8SJIoQO836'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115706 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115717 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--56010334--63d7--5603--a2fe--432c47d6dcb8-osd--block--56010334--63d7--5603--a2fe--432c47d6dcb8', 'dm-uuid-LVM-ucs6vcOg2JldR43Dv3HJMWOQXgxk4Rjo7I7oTzMQaB9pBSV82lBHT1wVuGjA34S1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115728 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115735 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115746 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf', 'scsi-SQEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part1', 'scsi-SQEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part14', 'scsi-SQEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part15', 'scsi-SQEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part16', 'scsi-SQEMU_QEMU_HARDDISK_9d0f5288-98d8-49aa-a26a-aae2304ebcdf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115758 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115769 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8832b43a--4370--5f7f--b8ca--e1ef860202d6-osd--block--8832b43a--4370--5f7f--b8ca--e1ef860202d6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WHoXZD-QS5F-ff1Z-a1ef-ziTD-c1GW-4CB7Fq', 'scsi-0QEMU_QEMU_HARDDISK_216f9756-46fe-48b3-8a57-6cc5b7e0c275', 'scsi-SQEMU_QEMU_HARDDISK_216f9756-46fe-48b3-8a57-6cc5b7e0c275'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115777 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115787 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b409e677--b998--57d2--be40--43b65c9fb72d-osd--block--b409e677--b998--57d2--be40--43b65c9fb72d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3K9cLz-vSxn-jDgK-Vo6h-4rad-CN2R-XamGFe', 'scsi-0QEMU_QEMU_HARDDISK_ebe7fd99-ddf0-4119-8dea-cb8b427f2aed', 'scsi-SQEMU_QEMU_HARDDISK_ebe7fd99-ddf0-4119-8dea-cb8b427f2aed'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115794 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115806 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b7c66eb-e150-40bb-863f-cd4924cbb0ab', 'scsi-SQEMU_QEMU_HARDDISK_6b7c66eb-e150-40bb-863f-cd4924cbb0ab'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115836 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115843 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-16-00-02-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115850 | orchestrator | skipping: [testbed-node-4] 
=> (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115856 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.115866 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115877 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115888 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370', 'scsi-SQEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part1', 'scsi-SQEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part14', 'scsi-SQEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part15', 'scsi-SQEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part16', 'scsi-SQEMU_QEMU_HARDDISK_4a5fa4db-3c7f-47bc-ae9f-3cfe1b94e370-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115895 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a154e298--15cb--5d50--9a1c--17bc1371db7e-osd--block--a154e298--15cb--5d50--9a1c--17bc1371db7e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-P3bfw4-K4Nt-fYqq-420J-zfth-TTxR-E3QAou', 'scsi-0QEMU_QEMU_HARDDISK_5c63af0b-1be6-4a9c-8f35-a4445080f1db', 'scsi-SQEMU_QEMU_HARDDISK_5c63af0b-1be6-4a9c-8f35-a4445080f1db'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115905 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--56010334--63d7--5603--a2fe--432c47d6dcb8-osd--block--56010334--63d7--5603--a2fe--432c47d6dcb8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3tif6z-fkDa-jtZO-rKIs-mW9z-sNzV-G9Mbhe', 'scsi-0QEMU_QEMU_HARDDISK_da9e83cb-2e5e-4388-ad73-1879a24665a3', 'scsi-SQEMU_QEMU_HARDDISK_da9e83cb-2e5e-4388-ad73-1879a24665a3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115916 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--457b984f--2001--5589--9984--9a697803acd2-osd--block--457b984f--2001--5589--9984--9a697803acd2', 'dm-uuid-LVM-tzkPODnltvbLVlVrcBUaBpanSvFXy5Iay3kG9ArxWiFQxKflJopyLP8Gmm4Yvbsw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115927 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46481bd5-1fc4-4619-9f81-82a2d5c944be', 'scsi-SQEMU_QEMU_HARDDISK_46481bd5-1fc4-4619-9f81-82a2d5c944be'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115934 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d2877fc6--62dc--51ad--b157--4c09a4f274b5-osd--block--d2877fc6--62dc--51ad--b157--4c09a4f274b5', 'dm-uuid-LVM-7LVTsKXd7HIvwLOlwIgvnvIMJ54t1cPOgYpDah7ONpCBRQytZ43PyRVPBl38dgW5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115940 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': 
['2025-09-16-00-02-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115950 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:57:20.115961 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115968 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115974 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115985 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115991 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.115998 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.116007 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.116019 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.116029 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595', 'scsi-SQEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part1', 'scsi-SQEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part14', 'scsi-SQEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part15', 'scsi-SQEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part16', 'scsi-SQEMU_QEMU_HARDDISK_527c098b-264c-4a31-af0b-91dc94de5595-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.116036 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--457b984f--2001--5589--9984--9a697803acd2-osd--block--457b984f--2001--5589--9984--9a697803acd2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mgW1BQ-nXoc-c1V8-OokY-Wdcb-Y2DR-cYNYFs', 'scsi-0QEMU_QEMU_HARDDISK_a99d92e2-a7d0-4115-a3b5-db7bfa0170a9', 'scsi-SQEMU_QEMU_HARDDISK_a99d92e2-a7d0-4115-a3b5-db7bfa0170a9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.116051 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d2877fc6--62dc--51ad--b157--4c09a4f274b5-osd--block--d2877fc6--62dc--51ad--b157--4c09a4f274b5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TlF0kY-AYW1-VWgt-gIFx-fbP3-MThZ-R9X0sN', 'scsi-0QEMU_QEMU_HARDDISK_f8c86b93-6440-4cc6-ba3c-00ae05f2a443', 'scsi-SQEMU_QEMU_HARDDISK_f8c86b93-6440-4cc6-ba3c-00ae05f2a443'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.116058 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad9de541-7002-4a51-9253-a212a9f46ca2', 'scsi-SQEMU_QEMU_HARDDISK_ad9de541-7002-4a51-9253-a212a9f46ca2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.116069 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-16-00-02-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-16 00:57:20.116075 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:57:20.116082 | orchestrator | 2025-09-16 00:57:20.116088 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-16 00:57:20.116094 | orchestrator | Tuesday 16 September 2025 00:55:27 +0000 (0:00:00.566) 0:00:15.959 ***** 2025-09-16 00:57:20.116101 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:57:20.116107 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:57:20.116113 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:57:20.116119 | orchestrator | 2025-09-16 00:57:20.116126 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-16 00:57:20.116132 | orchestrator | Tuesday 16 September 2025 00:55:28 +0000 (0:00:00.664) 0:00:16.623 ***** 2025-09-16 00:57:20.116138 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:57:20.116144 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:57:20.116150 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:57:20.116156 | orchestrator | 2025-09-16 00:57:20.116163 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-16 00:57:20.116169 | orchestrator | Tuesday 16 September 2025 00:55:28 +0000 (0:00:00.440) 0:00:17.063 ***** 2025-09-16 00:57:20.116179 | 
orchestrator | ok: [testbed-node-3] 2025-09-16 00:57:20.116185 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:57:20.116192 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:57:20.116198 | orchestrator | 2025-09-16 00:57:20.116204 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-16 00:57:20.116210 | orchestrator | Tuesday 16 September 2025 00:55:30 +0000 (0:00:01.551) 0:00:18.615 ***** 2025-09-16 00:57:20.116216 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.116222 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:57:20.116228 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:57:20.116235 | orchestrator | 2025-09-16 00:57:20.116241 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-16 00:57:20.116247 | orchestrator | Tuesday 16 September 2025 00:55:30 +0000 (0:00:00.268) 0:00:18.884 ***** 2025-09-16 00:57:20.116253 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.116259 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:57:20.116265 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:57:20.116271 | orchestrator | 2025-09-16 00:57:20.116277 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-16 00:57:20.116284 | orchestrator | Tuesday 16 September 2025 00:55:31 +0000 (0:00:00.416) 0:00:19.300 ***** 2025-09-16 00:57:20.116290 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.116296 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:57:20.116302 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:57:20.116308 | orchestrator | 2025-09-16 00:57:20.116318 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-16 00:57:20.116324 | orchestrator | Tuesday 16 September 2025 00:55:31 +0000 (0:00:00.478) 0:00:19.779 ***** 2025-09-16 00:57:20.116331 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-16 00:57:20.116337 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-16 00:57:20.116343 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-16 00:57:20.116350 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-16 00:57:20.116356 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-16 00:57:20.116362 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-16 00:57:20.116368 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-16 00:57:20.116374 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-16 00:57:20.116380 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-16 00:57:20.116386 | orchestrator | 2025-09-16 00:57:20.116393 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-16 00:57:20.116399 | orchestrator | Tuesday 16 September 2025 00:55:32 +0000 (0:00:00.926) 0:00:20.705 ***** 2025-09-16 00:57:20.116405 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-16 00:57:20.116411 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-16 00:57:20.116417 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-16 00:57:20.116423 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.116429 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-16 00:57:20.116435 | orchestrator | 
skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-16 00:57:20.116441 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-16 00:57:20.116447 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:57:20.116454 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-16 00:57:20.116460 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-16 00:57:20.116466 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-16 00:57:20.116472 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:57:20.116478 | orchestrator | 2025-09-16 00:57:20.116484 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-16 00:57:20.116490 | orchestrator | Tuesday 16 September 2025 00:55:32 +0000 (0:00:00.321) 0:00:21.026 ***** 2025-09-16 00:57:20.116502 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 00:57:20.116508 | orchestrator | 2025-09-16 00:57:20.116514 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-16 00:57:20.116521 | orchestrator | Tuesday 16 September 2025 00:55:33 +0000 (0:00:00.660) 0:00:21.687 ***** 2025-09-16 00:57:20.116528 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.116534 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:57:20.116540 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:57:20.116546 | orchestrator | 2025-09-16 00:57:20.116555 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-16 00:57:20.116562 | orchestrator | Tuesday 16 September 2025 00:55:33 +0000 (0:00:00.314) 0:00:22.001 ***** 2025-09-16 00:57:20.116568 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.116574 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:57:20.116580 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:57:20.116586 | orchestrator | 2025-09-16 00:57:20.116592 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-16 00:57:20.116598 | orchestrator | Tuesday 16 September 2025 00:55:34 +0000 (0:00:00.278) 0:00:22.280 ***** 2025-09-16 00:57:20.116605 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.116611 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:57:20.116617 | orchestrator | skipping: [testbed-node-5] 2025-09-16 00:57:20.116623 | orchestrator | 2025-09-16 00:57:20.116629 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-16 00:57:20.116635 | orchestrator | Tuesday 16 September 2025 00:55:34 +0000 (0:00:00.287) 0:00:22.567 ***** 2025-09-16 00:57:20.116642 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:57:20.116648 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:57:20.116654 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:57:20.116660 | orchestrator | 2025-09-16 00:57:20.116666 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-16 00:57:20.116672 | orchestrator | Tuesday 16 September 2025 00:55:34 +0000 (0:00:00.537) 0:00:23.105 ***** 2025-09-16 00:57:20.116678 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-16 00:57:20.116684 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-16 
00:57:20.116691 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-16 00:57:20.116697 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.116703 | orchestrator | 2025-09-16 00:57:20.116709 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-16 00:57:20.116715 | orchestrator | Tuesday 16 September 2025 00:55:35 +0000 (0:00:00.355) 0:00:23.461 ***** 2025-09-16 00:57:20.116721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-16 00:57:20.116727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-16 00:57:20.116733 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-16 00:57:20.116739 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.116746 | orchestrator | 2025-09-16 00:57:20.116752 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-16 00:57:20.116758 | orchestrator | Tuesday 16 September 2025 00:55:35 +0000 (0:00:00.363) 0:00:23.825 ***** 2025-09-16 00:57:20.116764 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-16 00:57:20.116770 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-16 00:57:20.116776 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-16 00:57:20.116785 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.116792 | orchestrator | 2025-09-16 00:57:20.116798 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-16 00:57:20.116804 | orchestrator | Tuesday 16 September 2025 00:55:35 +0000 (0:00:00.364) 0:00:24.189 ***** 2025-09-16 00:57:20.116858 | orchestrator | ok: [testbed-node-3] 2025-09-16 00:57:20.116865 | orchestrator | ok: [testbed-node-4] 2025-09-16 00:57:20.116871 | orchestrator | ok: [testbed-node-5] 2025-09-16 00:57:20.116878 | orchestrator | 2025-09-16 00:57:20.116884 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-16 00:57:20.116890 | orchestrator | Tuesday 16 September 2025 00:55:36 +0000 (0:00:00.299) 0:00:24.488 ***** 2025-09-16 00:57:20.116896 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-16 00:57:20.116903 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-16 00:57:20.116909 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-16 00:57:20.116915 | orchestrator | 2025-09-16 00:57:20.116922 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-16 00:57:20.116928 | orchestrator | Tuesday 16 September 2025 00:55:36 +0000 (0:00:00.492) 0:00:24.981 ***** 2025-09-16 00:57:20.116934 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-16 00:57:20.116940 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-16 00:57:20.116946 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-16 00:57:20.116952 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-16 00:57:20.116959 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-16 00:57:20.116965 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-16 00:57:20.116971 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => 
(item=testbed-manager) 2025-09-16 00:57:20.116977 | orchestrator | 2025-09-16 00:57:20.116983 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-16 00:57:20.116990 | orchestrator | Tuesday 16 September 2025 00:55:37 +0000 (0:00:00.924) 0:00:25.906 ***** 2025-09-16 00:57:20.116996 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-16 00:57:20.117002 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-16 00:57:20.117008 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-16 00:57:20.117014 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-16 00:57:20.117020 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-16 00:57:20.117027 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-16 00:57:20.117033 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-16 00:57:20.117039 | orchestrator | 2025-09-16 00:57:20.117048 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-09-16 00:57:20.117054 | orchestrator | Tuesday 16 September 2025 00:55:39 +0000 (0:00:01.825) 0:00:27.731 ***** 2025-09-16 00:57:20.117061 | orchestrator | skipping: [testbed-node-3] 2025-09-16 00:57:20.117067 | orchestrator | skipping: [testbed-node-4] 2025-09-16 00:57:20.117073 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-09-16 00:57:20.117079 | orchestrator | 2025-09-16 00:57:20.117085 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-09-16 00:57:20.117091 | orchestrator | Tuesday 16 September 2025 00:55:39 +0000 (0:00:00.342) 0:00:28.073 ***** 2025-09-16 00:57:20.117098 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-16 00:57:20.117105 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-16 00:57:20.117116 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-16 00:57:20.117122 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-16 00:57:20.117129 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 
'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-16 00:57:20.117135 | orchestrator | 2025-09-16 00:57:20.117144 | orchestrator | TASK [generate keys] *********************************************************** 2025-09-16 00:57:20.117151 | orchestrator | Tuesday 16 September 2025 00:56:25 +0000 (0:00:45.335) 0:01:13.409 ***** 2025-09-16 00:57:20.117157 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:57:20.117163 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:57:20.117169 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:57:20.117175 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:57:20.117182 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:57:20.117188 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:57:20.117194 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-09-16 00:57:20.117200 | orchestrator | 2025-09-16 00:57:20.117206 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-09-16 00:57:20.117212 | orchestrator | Tuesday 16 September 2025 00:56:48 +0000 (0:00:23.362) 0:01:36.771 ***** 2025-09-16 00:57:20.117218 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:57:20.117224 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:57:20.117230 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:57:20.117237 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:57:20.117243 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:57:20.117249 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:57:20.117255 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-16 00:57:20.117261 | orchestrator | 2025-09-16 00:57:20.117267 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-09-16 00:57:20.117273 | orchestrator | Tuesday 16 September 2025 00:57:00 +0000 (0:00:11.827) 0:01:48.599 ***** 2025-09-16 00:57:20.117280 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:57:20.117286 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-16 00:57:20.117292 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-16 00:57:20.117298 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:57:20.117304 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-16 00:57:20.117310 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-16 00:57:20.117320 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:57:20.117330 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-16 00:57:20.117337 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-2(192.168.16.12)] => (item=None) 2025-09-16 00:57:20.117343 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:57:20.117349 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-16 00:57:20.117355 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-16 00:57:20.117361 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:57:20.117367 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-16 00:57:20.117373 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-16 00:57:20.117379 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-16 00:57:20.117385 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-16 00:57:20.117391 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-16 00:57:20.117398 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-09-16 00:57:20.117404 | orchestrator | 2025-09-16 00:57:20.117410 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:57:20.117417 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-09-16 00:57:20.117424 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-16 00:57:20.117430 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-16 00:57:20.117437 | orchestrator | 2025-09-16 00:57:20.117443 | orchestrator | 2025-09-16 00:57:20.117449 | orchestrator | 2025-09-16 00:57:20.117455 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:57:20.117461 | orchestrator | Tuesday 16 September 2025 00:57:17 +0000 (0:00:17.627) 0:02:06.227 ***** 2025-09-16 00:57:20.117468 | orchestrator | =============================================================================== 2025-09-16 00:57:20.117474 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.34s 2025-09-16 00:57:20.117484 | orchestrator | generate keys ---------------------------------------------------------- 23.36s 2025-09-16 00:57:20.117490 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.63s 2025-09-16 00:57:20.117497 | orchestrator | get keys from monitors ------------------------------------------------- 11.83s 2025-09-16 00:57:20.117503 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.03s 2025-09-16 00:57:20.117509 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.83s 2025-09-16 00:57:20.117515 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.73s 2025-09-16 00:57:20.117521 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 1.55s 2025-09-16 00:57:20.117528 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.93s 2025-09-16 00:57:20.117534 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.92s 2025-09-16 00:57:20.117540 | orchestrator | 
ceph-facts : Check if podman binary is present -------------------------- 0.75s 2025-09-16 00:57:20.117546 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.66s 2025-09-16 00:57:20.117552 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.66s 2025-09-16 00:57:20.117558 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.66s 2025-09-16 00:57:20.117564 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.66s 2025-09-16 00:57:20.117574 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.64s 2025-09-16 00:57:20.117580 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.57s 2025-09-16 00:57:20.117587 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.55s 2025-09-16 00:57:20.117593 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.54s 2025-09-16 00:57:20.117599 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.51s 2025-09-16 00:57:20.122630 | orchestrator | 2025-09-16 00:57:20 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:57:20.124222 | orchestrator | 2025-09-16 00:57:20 | INFO  | Task 3d5e229a-5906-40e3-a379-4b0e6b686fff is in state STARTED 2025-09-16 00:57:20.125629 | orchestrator | 2025-09-16 00:57:20 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:57:20.125863 | orchestrator | 2025-09-16 00:57:20 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:57:23.175103 | orchestrator | 2025-09-16 00:57:23 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:57:23.177207 | orchestrator | 2025-09-16 00:57:23 | INFO  | Task 3d5e229a-5906-40e3-a379-4b0e6b686fff is in state STARTED 2025-09-16 00:57:23.179382 | orchestrator | 2025-09-16 00:57:23 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:57:23.179435 | orchestrator | 2025-09-16 00:57:23 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:57:26.224880 | orchestrator | 2025-09-16 00:57:26 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:57:26.226109 | orchestrator | 2025-09-16 00:57:26 | INFO  | Task 3d5e229a-5906-40e3-a379-4b0e6b686fff is in state STARTED 2025-09-16 00:57:26.227922 | orchestrator | 2025-09-16 00:57:26 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:57:26.228322 | orchestrator | 2025-09-16 00:57:26 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:57:29.275464 | orchestrator | 2025-09-16 00:57:29 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:57:29.276107 | orchestrator | 2025-09-16 00:57:29 | INFO  | Task 3d5e229a-5906-40e3-a379-4b0e6b686fff is in state STARTED 2025-09-16 00:57:29.278075 | orchestrator | 2025-09-16 00:57:29 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:57:29.278469 | orchestrator | 2025-09-16 00:57:29 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:57:32.333153 | orchestrator | 2025-09-16 00:57:32 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:57:32.335787 | orchestrator | 2025-09-16 00:57:32 | INFO  | Task 3d5e229a-5906-40e3-a379-4b0e6b686fff is in state STARTED 
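The repeated INFO lines above and below show the deployment wrapper on the manager polling the state of the three asynchronous tasks (the Ceph pool/key work and the Kolla deploys) until each one reports SUCCESS. The following minimal sketch illustrates a polling loop of that shape; it is not the actual OSISM implementation, and the wait_for_tasks/get_state names and the one-second interval are assumptions taken only from the log messages.

    import time

    def wait_for_tasks(task_ids, get_state, interval=1.0):
        """Poll task states until every task has left the STARTED state.

        get_state is a caller-supplied callable that maps a task ID to a
        state string such as "STARTED" or "SUCCESS" (assumed interface).
        """
        pending = set(task_ids)
        while pending:
            # Iterate over a snapshot so finished tasks can be dropped safely.
            for task_id in sorted(pending):
                state = get_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state != "STARTED":
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)

A loop of this shape explains why the same task IDs reappear every few seconds in the log: each iteration re-queries every still-running task, prints its state, and sleeps before the next check; once a task reports SUCCESS it drops out of the set and no longer appears in the output.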
2025-09-16 00:57:32.338954 | orchestrator | 2025-09-16 00:57:32 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:57:32.339222 | orchestrator | 2025-09-16 00:57:32 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:57:35.389232 | orchestrator | 2025-09-16 00:57:35 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:57:35.392883 | orchestrator | 2025-09-16 00:57:35 | INFO  | Task 3d5e229a-5906-40e3-a379-4b0e6b686fff is in state STARTED 2025-09-16 00:57:35.395392 | orchestrator | 2025-09-16 00:57:35 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:57:35.395867 | orchestrator | 2025-09-16 00:57:35 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:57:38.448221 | orchestrator | 2025-09-16 00:57:38 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:57:38.450350 | orchestrator | 2025-09-16 00:57:38 | INFO  | Task 3d5e229a-5906-40e3-a379-4b0e6b686fff is in state STARTED 2025-09-16 00:57:38.452110 | orchestrator | 2025-09-16 00:57:38 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:57:38.452230 | orchestrator | 2025-09-16 00:57:38 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:57:41.501267 | orchestrator | 2025-09-16 00:57:41 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:57:41.505171 | orchestrator | 2025-09-16 00:57:41 | INFO  | Task 3d5e229a-5906-40e3-a379-4b0e6b686fff is in state STARTED 2025-09-16 00:57:41.508199 | orchestrator | 2025-09-16 00:57:41 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:57:41.508590 | orchestrator | 2025-09-16 00:57:41 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:57:44.559176 | orchestrator | 2025-09-16 00:57:44 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:57:44.559864 | orchestrator | 2025-09-16 00:57:44 | INFO  | Task 3d5e229a-5906-40e3-a379-4b0e6b686fff is in state STARTED 2025-09-16 00:57:44.560753 | orchestrator | 2025-09-16 00:57:44 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:57:44.561016 | orchestrator | 2025-09-16 00:57:44 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:57:47.603271 | orchestrator | 2025-09-16 00:57:47 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:57:47.604684 | orchestrator | 2025-09-16 00:57:47 | INFO  | Task 3d5e229a-5906-40e3-a379-4b0e6b686fff is in state SUCCESS 2025-09-16 00:57:47.606532 | orchestrator | 2025-09-16 00:57:47 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:57:47.606761 | orchestrator | 2025-09-16 00:57:47 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:57:50.659423 | orchestrator | 2025-09-16 00:57:50 | INFO  | Task dddaf984-9cdd-4bff-94fd-84d367c2bf5d is in state STARTED 2025-09-16 00:57:50.661480 | orchestrator | 2025-09-16 00:57:50 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:57:50.663050 | orchestrator | 2025-09-16 00:57:50 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:57:50.663420 | orchestrator | 2025-09-16 00:57:50 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:57:53.700161 | orchestrator | 2025-09-16 00:57:53 | INFO  | Task dddaf984-9cdd-4bff-94fd-84d367c2bf5d is in state STARTED 2025-09-16 00:57:53.701327 | orchestrator | 
2025-09-16 00:57:53 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:57:53.704474 | orchestrator | 2025-09-16 00:57:53 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:57:53.704502 | orchestrator | 2025-09-16 00:57:53 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:57:56.751121 | orchestrator | 2025-09-16 00:57:56 | INFO  | Task dddaf984-9cdd-4bff-94fd-84d367c2bf5d is in state STARTED 2025-09-16 00:57:56.751701 | orchestrator | 2025-09-16 00:57:56 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:57:56.752236 | orchestrator | 2025-09-16 00:57:56 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:57:56.752259 | orchestrator | 2025-09-16 00:57:56 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:57:59.787283 | orchestrator | 2025-09-16 00:57:59 | INFO  | Task dddaf984-9cdd-4bff-94fd-84d367c2bf5d is in state STARTED 2025-09-16 00:57:59.789176 | orchestrator | 2025-09-16 00:57:59 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:57:59.790557 | orchestrator | 2025-09-16 00:57:59 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:57:59.790724 | orchestrator | 2025-09-16 00:57:59 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:58:02.831660 | orchestrator | 2025-09-16 00:58:02 | INFO  | Task dddaf984-9cdd-4bff-94fd-84d367c2bf5d is in state STARTED 2025-09-16 00:58:02.833219 | orchestrator | 2025-09-16 00:58:02 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:58:02.835337 | orchestrator | 2025-09-16 00:58:02 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:58:02.835368 | orchestrator | 2025-09-16 00:58:02 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:58:05.888304 | orchestrator | 2025-09-16 00:58:05 | INFO  | Task dddaf984-9cdd-4bff-94fd-84d367c2bf5d is in state STARTED 2025-09-16 00:58:05.890146 | orchestrator | 2025-09-16 00:58:05 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:58:05.891234 | orchestrator | 2025-09-16 00:58:05 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state STARTED 2025-09-16 00:58:05.891474 | orchestrator | 2025-09-16 00:58:05 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:58:08.937420 | orchestrator | 2025-09-16 00:58:08 | INFO  | Task dddaf984-9cdd-4bff-94fd-84d367c2bf5d is in state STARTED 2025-09-16 00:58:08.938304 | orchestrator | 2025-09-16 00:58:08 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:58:08.939471 | orchestrator | 2025-09-16 00:58:08 | INFO  | Task 18fe8cce-4187-4f92-9c16-db97fb8165f3 is in state SUCCESS 2025-09-16 00:58:08.941211 | orchestrator | 2025-09-16 00:58:08.941247 | orchestrator | 2025-09-16 00:58:08.941259 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-09-16 00:58:08.941271 | orchestrator | 2025-09-16 00:58:08.941282 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-09-16 00:58:08.941294 | orchestrator | Tuesday 16 September 2025 00:57:22 +0000 (0:00:00.161) 0:00:00.161 ***** 2025-09-16 00:58:08.941305 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-09-16 00:58:08.941318 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-16 00:58:08.941329 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-16 00:58:08.941340 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-09-16 00:58:08.941351 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-16 00:58:08.941362 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-09-16 00:58:08.941372 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-09-16 00:58:08.941383 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-09-16 00:58:08.941394 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-09-16 00:58:08.941405 | orchestrator | 2025-09-16 00:58:08.941416 | orchestrator | TASK [Create share directory] ************************************************** 2025-09-16 00:58:08.941654 | orchestrator | Tuesday 16 September 2025 00:57:26 +0000 (0:00:04.232) 0:00:04.393 ***** 2025-09-16 00:58:08.941673 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-16 00:58:08.941685 | orchestrator | 2025-09-16 00:58:08.941696 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-09-16 00:58:08.941707 | orchestrator | Tuesday 16 September 2025 00:57:27 +0000 (0:00:00.969) 0:00:05.363 ***** 2025-09-16 00:58:08.941743 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-09-16 00:58:08.941755 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-16 00:58:08.941766 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-16 00:58:08.941777 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-09-16 00:58:08.941788 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-16 00:58:08.941799 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-09-16 00:58:08.941809 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-09-16 00:58:08.941843 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-09-16 00:58:08.941855 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-09-16 00:58:08.941866 | orchestrator | 2025-09-16 00:58:08.941876 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-09-16 00:58:08.941887 | orchestrator | Tuesday 16 September 2025 00:57:40 +0000 (0:00:12.803) 0:00:18.166 ***** 2025-09-16 00:58:08.941898 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-09-16 00:58:08.941909 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-16 00:58:08.941920 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-16 00:58:08.941930 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-09-16 00:58:08.941941 | orchestrator | changed: 
[testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-16 00:58:08.941952 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-09-16 00:58:08.941963 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-09-16 00:58:08.941973 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-09-16 00:58:08.941997 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-09-16 00:58:08.942009 | orchestrator | 2025-09-16 00:58:08.942066 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:58:08.942078 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:58:08.942091 | orchestrator | 2025-09-16 00:58:08.942102 | orchestrator | 2025-09-16 00:58:08.942113 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:58:08.942124 | orchestrator | Tuesday 16 September 2025 00:57:46 +0000 (0:00:06.361) 0:00:24.528 ***** 2025-09-16 00:58:08.942135 | orchestrator | =============================================================================== 2025-09-16 00:58:08.942145 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.80s 2025-09-16 00:58:08.942156 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.36s 2025-09-16 00:58:08.942167 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.23s 2025-09-16 00:58:08.942177 | orchestrator | Create share directory -------------------------------------------------- 0.97s 2025-09-16 00:58:08.942188 | orchestrator | 2025-09-16 00:58:08.942199 | orchestrator | 2025-09-16 00:58:08.942209 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-16 00:58:08.942220 | orchestrator | 2025-09-16 00:58:08.942244 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-16 00:58:08.942255 | orchestrator | Tuesday 16 September 2025 00:56:26 +0000 (0:00:00.255) 0:00:00.255 ***** 2025-09-16 00:58:08.942266 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:58:08.942277 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:58:08.942288 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:58:08.942298 | orchestrator | 2025-09-16 00:58:08.942318 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-16 00:58:08.942329 | orchestrator | Tuesday 16 September 2025 00:56:27 +0000 (0:00:00.288) 0:00:00.543 ***** 2025-09-16 00:58:08.942340 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-09-16 00:58:08.942351 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-09-16 00:58:08.942362 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-09-16 00:58:08.942372 | orchestrator | 2025-09-16 00:58:08.942383 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-09-16 00:58:08.942394 | orchestrator | 2025-09-16 00:58:08.942405 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-16 00:58:08.942415 | orchestrator | Tuesday 16 September 2025 00:56:27 +0000 (0:00:00.425) 0:00:00.968 ***** 2025-09-16 00:58:08.942426 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:58:08.942437 | orchestrator | 2025-09-16 00:58:08.942448 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-09-16 00:58:08.942459 | orchestrator | Tuesday 16 September 2025 00:56:27 +0000 (0:00:00.502) 0:00:01.471 ***** 2025-09-16 00:58:08.942482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-16 00:58:08.942512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-16 00:58:08.942539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-16 00:58:08.942552 | orchestrator | 2025-09-16 00:58:08.942563 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-09-16 00:58:08.942575 | orchestrator | 
Tuesday 16 September 2025 00:56:29 +0000 (0:00:01.087) 0:00:02.558 ***** 2025-09-16 00:58:08.942586 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:58:08.942596 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:58:08.942614 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:58:08.942625 | orchestrator | 2025-09-16 00:58:08.942636 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-16 00:58:08.942646 | orchestrator | Tuesday 16 September 2025 00:56:29 +0000 (0:00:00.393) 0:00:02.951 ***** 2025-09-16 00:58:08.942657 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-16 00:58:08.942668 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-16 00:58:08.942684 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-09-16 00:58:08.942696 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-09-16 00:58:08.942706 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-09-16 00:58:08.942717 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-09-16 00:58:08.942728 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-09-16 00:58:08.942738 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-09-16 00:58:08.942749 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-16 00:58:08.942760 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-16 00:58:08.942771 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-09-16 00:58:08.942781 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-09-16 00:58:08.942792 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-09-16 00:58:08.942803 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-09-16 00:58:08.942813 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-09-16 00:58:08.942843 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-09-16 00:58:08.942854 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-16 00:58:08.942865 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-16 00:58:08.942876 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-09-16 00:58:08.942886 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-09-16 00:58:08.942897 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-09-16 00:58:08.942908 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-09-16 00:58:08.942918 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-09-16 00:58:08.942929 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-09-16 00:58:08.942941 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-09-16 00:58:08.942954 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-09-16 00:58:08.942965 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-09-16 00:58:08.942976 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-09-16 00:58:08.942987 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-09-16 00:58:08.942998 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-09-16 00:58:08.943015 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-09-16 00:58:08.943030 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-09-16 00:58:08.943041 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-09-16 00:58:08.943053 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-09-16 00:58:08.943064 | orchestrator | 2025-09-16 00:58:08.943075 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-16 00:58:08.943086 | orchestrator | Tuesday 16 September 2025 00:56:30 +0000 (0:00:00.682) 0:00:03.634 ***** 2025-09-16 00:58:08.943097 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:58:08.943107 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:58:08.943118 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:58:08.943129 | orchestrator | 2025-09-16 00:58:08.943140 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-16 00:58:08.943151 | orchestrator | Tuesday 16 September 2025 00:56:30 +0000 (0:00:00.290) 0:00:03.925 ***** 2025-09-16 00:58:08.943162 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.943172 | orchestrator | 2025-09-16 00:58:08.943183 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-16 00:58:08.943199 | orchestrator | Tuesday 16 September 2025 00:56:30 +0000 (0:00:00.117) 0:00:04.042 ***** 2025-09-16 00:58:08.943210 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.943221 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:58:08.943232 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:58:08.943243 | orchestrator | 2025-09-16 00:58:08.943254 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-16 00:58:08.943264 | orchestrator | Tuesday 16 September 2025 00:56:30 +0000 (0:00:00.428) 0:00:04.471 ***** 2025-09-16 00:58:08.943275 | orchestrator | ok: [testbed-node-0] 2025-09-16 
00:58:08.943286 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:58:08.943297 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:58:08.943307 | orchestrator | 2025-09-16 00:58:08.943318 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-16 00:58:08.943329 | orchestrator | Tuesday 16 September 2025 00:56:31 +0000 (0:00:00.313) 0:00:04.784 ***** 2025-09-16 00:58:08.943339 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.943350 | orchestrator | 2025-09-16 00:58:08.943361 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-16 00:58:08.943371 | orchestrator | Tuesday 16 September 2025 00:56:31 +0000 (0:00:00.131) 0:00:04.916 ***** 2025-09-16 00:58:08.943382 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.943393 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:58:08.943403 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:58:08.943414 | orchestrator | 2025-09-16 00:58:08.943425 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-16 00:58:08.943436 | orchestrator | Tuesday 16 September 2025 00:56:31 +0000 (0:00:00.272) 0:00:05.188 ***** 2025-09-16 00:58:08.943447 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:58:08.943457 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:58:08.943468 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:58:08.943479 | orchestrator | 2025-09-16 00:58:08.943489 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-16 00:58:08.943500 | orchestrator | Tuesday 16 September 2025 00:56:31 +0000 (0:00:00.318) 0:00:05.506 ***** 2025-09-16 00:58:08.943510 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.943521 | orchestrator | 2025-09-16 00:58:08.943539 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-16 00:58:08.943550 | orchestrator | Tuesday 16 September 2025 00:56:32 +0000 (0:00:00.144) 0:00:05.651 ***** 2025-09-16 00:58:08.943561 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.943571 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:58:08.943582 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:58:08.943593 | orchestrator | 2025-09-16 00:58:08.943603 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-16 00:58:08.943614 | orchestrator | Tuesday 16 September 2025 00:56:32 +0000 (0:00:00.538) 0:00:06.189 ***** 2025-09-16 00:58:08.943625 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:58:08.943636 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:58:08.943646 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:58:08.943657 | orchestrator | 2025-09-16 00:58:08.943668 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-16 00:58:08.943679 | orchestrator | Tuesday 16 September 2025 00:56:32 +0000 (0:00:00.320) 0:00:06.510 ***** 2025-09-16 00:58:08.943689 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.943700 | orchestrator | 2025-09-16 00:58:08.943710 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-16 00:58:08.943721 | orchestrator | Tuesday 16 September 2025 00:56:33 +0000 (0:00:00.117) 0:00:06.627 ***** 2025-09-16 00:58:08.943732 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.943742 | orchestrator | 
skipping: [testbed-node-1] 2025-09-16 00:58:08.943753 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:58:08.943764 | orchestrator | 2025-09-16 00:58:08.943775 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-16 00:58:08.943785 | orchestrator | Tuesday 16 September 2025 00:56:33 +0000 (0:00:00.282) 0:00:06.909 ***** 2025-09-16 00:58:08.943796 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:58:08.943807 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:58:08.943818 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:58:08.943845 | orchestrator | 2025-09-16 00:58:08.943856 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-16 00:58:08.943867 | orchestrator | Tuesday 16 September 2025 00:56:33 +0000 (0:00:00.299) 0:00:07.209 ***** 2025-09-16 00:58:08.943877 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.943888 | orchestrator | 2025-09-16 00:58:08.943899 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-16 00:58:08.943910 | orchestrator | Tuesday 16 September 2025 00:56:33 +0000 (0:00:00.288) 0:00:07.498 ***** 2025-09-16 00:58:08.943921 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.943931 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:58:08.943942 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:58:08.943953 | orchestrator | 2025-09-16 00:58:08.943969 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-16 00:58:08.943980 | orchestrator | Tuesday 16 September 2025 00:56:34 +0000 (0:00:00.280) 0:00:07.778 ***** 2025-09-16 00:58:08.943991 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:58:08.944002 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:58:08.944013 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:58:08.944023 | orchestrator | 2025-09-16 00:58:08.944034 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-16 00:58:08.944045 | orchestrator | Tuesday 16 September 2025 00:56:34 +0000 (0:00:00.278) 0:00:08.056 ***** 2025-09-16 00:58:08.944056 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.944067 | orchestrator | 2025-09-16 00:58:08.944077 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-16 00:58:08.944088 | orchestrator | Tuesday 16 September 2025 00:56:34 +0000 (0:00:00.123) 0:00:08.179 ***** 2025-09-16 00:58:08.944099 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.944110 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:58:08.944121 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:58:08.944131 | orchestrator | 2025-09-16 00:58:08.944143 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-16 00:58:08.944160 | orchestrator | Tuesday 16 September 2025 00:56:34 +0000 (0:00:00.295) 0:00:08.475 ***** 2025-09-16 00:58:08.944171 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:58:08.944181 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:58:08.944192 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:58:08.944203 | orchestrator | 2025-09-16 00:58:08.944219 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-16 00:58:08.944230 | orchestrator | Tuesday 16 September 2025 00:56:35 +0000 (0:00:00.486) 0:00:08.961 ***** 2025-09-16 
00:58:08.944241 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.944252 | orchestrator | 2025-09-16 00:58:08.944263 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-16 00:58:08.944274 | orchestrator | Tuesday 16 September 2025 00:56:35 +0000 (0:00:00.131) 0:00:09.093 ***** 2025-09-16 00:58:08.944285 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.944296 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:58:08.944307 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:58:08.944318 | orchestrator | 2025-09-16 00:58:08.944329 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-16 00:58:08.944340 | orchestrator | Tuesday 16 September 2025 00:56:35 +0000 (0:00:00.271) 0:00:09.365 ***** 2025-09-16 00:58:08.944350 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:58:08.944361 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:58:08.944372 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:58:08.944383 | orchestrator | 2025-09-16 00:58:08.944394 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-16 00:58:08.944405 | orchestrator | Tuesday 16 September 2025 00:56:36 +0000 (0:00:00.299) 0:00:09.665 ***** 2025-09-16 00:58:08.944416 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.944427 | orchestrator | 2025-09-16 00:58:08.944438 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-16 00:58:08.944449 | orchestrator | Tuesday 16 September 2025 00:56:36 +0000 (0:00:00.113) 0:00:09.779 ***** 2025-09-16 00:58:08.944459 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.944470 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:58:08.944481 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:58:08.944492 | orchestrator | 2025-09-16 00:58:08.944503 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-16 00:58:08.944514 | orchestrator | Tuesday 16 September 2025 00:56:36 +0000 (0:00:00.265) 0:00:10.044 ***** 2025-09-16 00:58:08.944525 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:58:08.944536 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:58:08.944547 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:58:08.944557 | orchestrator | 2025-09-16 00:58:08.944568 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-16 00:58:08.944579 | orchestrator | Tuesday 16 September 2025 00:56:36 +0000 (0:00:00.468) 0:00:10.513 ***** 2025-09-16 00:58:08.944590 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.944601 | orchestrator | 2025-09-16 00:58:08.944612 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-16 00:58:08.944623 | orchestrator | Tuesday 16 September 2025 00:56:37 +0000 (0:00:00.134) 0:00:10.647 ***** 2025-09-16 00:58:08.944634 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.944644 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:58:08.944655 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:58:08.944666 | orchestrator | 2025-09-16 00:58:08.944677 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-16 00:58:08.944688 | orchestrator | Tuesday 16 September 2025 00:56:37 +0000 (0:00:00.282) 0:00:10.930 ***** 2025-09-16 00:58:08.944699 | 
orchestrator | ok: [testbed-node-0] 2025-09-16 00:58:08.944710 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:58:08.944721 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:58:08.944732 | orchestrator | 2025-09-16 00:58:08.944743 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-16 00:58:08.944760 | orchestrator | Tuesday 16 September 2025 00:56:37 +0000 (0:00:00.283) 0:00:11.213 ***** 2025-09-16 00:58:08.944771 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.944781 | orchestrator | 2025-09-16 00:58:08.944792 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-16 00:58:08.944803 | orchestrator | Tuesday 16 September 2025 00:56:37 +0000 (0:00:00.133) 0:00:11.347 ***** 2025-09-16 00:58:08.944813 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.944853 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:58:08.944864 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:58:08.944875 | orchestrator | 2025-09-16 00:58:08.944886 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-09-16 00:58:08.944897 | orchestrator | Tuesday 16 September 2025 00:56:38 +0000 (0:00:00.484) 0:00:11.831 ***** 2025-09-16 00:58:08.944907 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:58:08.944918 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:58:08.944929 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:58:08.944940 | orchestrator | 2025-09-16 00:58:08.944951 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-09-16 00:58:08.944962 | orchestrator | Tuesday 16 September 2025 00:56:39 +0000 (0:00:01.572) 0:00:13.404 ***** 2025-09-16 00:58:08.944985 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-16 00:58:08.944996 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-16 00:58:08.945007 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-16 00:58:08.945018 | orchestrator | 2025-09-16 00:58:08.945029 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-09-16 00:58:08.945039 | orchestrator | Tuesday 16 September 2025 00:56:41 +0000 (0:00:01.580) 0:00:14.984 ***** 2025-09-16 00:58:08.945050 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-16 00:58:08.945061 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-16 00:58:08.945072 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-16 00:58:08.945083 | orchestrator | 2025-09-16 00:58:08.945093 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-09-16 00:58:08.945104 | orchestrator | Tuesday 16 September 2025 00:56:43 +0000 (0:00:02.169) 0:00:17.154 ***** 2025-09-16 00:58:08.945120 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-16 00:58:08.945131 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-16 00:58:08.945142 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-16 00:58:08.945153 | orchestrator | 2025-09-16 00:58:08.945164 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-09-16 00:58:08.945175 | orchestrator | Tuesday 16 September 2025 00:56:45 +0000 (0:00:02.116) 0:00:19.270 ***** 2025-09-16 00:58:08.945185 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.945196 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:58:08.945207 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:58:08.945218 | orchestrator | 2025-09-16 00:58:08.945229 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-09-16 00:58:08.945240 | orchestrator | Tuesday 16 September 2025 00:56:46 +0000 (0:00:00.294) 0:00:19.565 ***** 2025-09-16 00:58:08.945251 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.945262 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:58:08.945273 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:58:08.945284 | orchestrator | 2025-09-16 00:58:08.945294 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-16 00:58:08.945305 | orchestrator | Tuesday 16 September 2025 00:56:46 +0000 (0:00:00.291) 0:00:19.856 ***** 2025-09-16 00:58:08.945323 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:58:08.945334 | orchestrator | 2025-09-16 00:58:08.945344 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-09-16 00:58:08.945355 | orchestrator | Tuesday 16 September 2025 00:56:46 +0000 (0:00:00.562) 0:00:20.419 ***** 2025-09-16 00:58:08.945374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-16 00:58:08.945395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-16 00:58:08.945421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-16 00:58:08.945433 | orchestrator | 2025-09-16 00:58:08.945445 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-09-16 00:58:08.945455 | orchestrator | Tuesday 16 September 2025 00:56:48 +0000 (0:00:01.666) 0:00:22.085 ***** 2025-09-16 00:58:08.945475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-16 00:58:08.945494 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.945513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-16 00:58:08.945530 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:58:08.945542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-16 00:58:08.945560 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:58:08.945571 | orchestrator | 2025-09-16 00:58:08.945582 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-09-16 00:58:08.945593 | orchestrator | Tuesday 16 September 2025 00:56:49 +0000 (0:00:00.595) 0:00:22.681 ***** 2025-09-16 00:58:08.945617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-16 00:58:08.945636 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.945648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-16 00:58:08.945660 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:58:08.945684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-16 00:58:08.945703 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:58:08.945714 | orchestrator | 2025-09-16 00:58:08.945725 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-09-16 00:58:08.945736 | orchestrator | Tuesday 16 September 2025 00:56:49 +0000 (0:00:00.806) 0:00:23.488 ***** 2025-09-16 00:58:08.945747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-16 00:58:08.945772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-16 00:58:08.945797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-16 00:58:08.945810 | orchestrator | 2025-09-16 00:58:08.945844 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-16 00:58:08.945855 | orchestrator | Tuesday 16 September 2025 00:56:51 +0000 (0:00:01.630) 0:00:25.119 ***** 2025-09-16 00:58:08.945866 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:58:08.945877 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:58:08.945888 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:58:08.945899 | orchestrator | 2025-09-16 00:58:08.945910 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-16 00:58:08.945921 | orchestrator | Tuesday 16 September 2025 00:56:51 +0000 (0:00:00.286) 0:00:25.405 ***** 2025-09-16 00:58:08.945931 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:58:08.945953 | orchestrator | 2025-09-16 00:58:08.945964 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-09-16 00:58:08.945975 | orchestrator | Tuesday 16 September 2025 00:56:52 +0000 (0:00:00.509) 0:00:25.915 ***** 2025-09-16 00:58:08.945985 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:58:08.945996 | orchestrator | 2025-09-16 00:58:08.946013 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-09-16 00:58:08.946054 | orchestrator | Tuesday 16 September 2025 00:56:54 +0000 (0:00:02.281) 0:00:28.196 ***** 2025-09-16 00:58:08.946065 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:58:08.946076 | orchestrator | 2025-09-16 00:58:08.946087 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-09-16 00:58:08.946098 | orchestrator | Tuesday 16 September 2025 00:56:57 +0000 (0:00:02.629) 0:00:30.826 ***** 2025-09-16 00:58:08.946109 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:58:08.946119 | orchestrator | 2025-09-16 00:58:08.946130 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-16 00:58:08.946141 | orchestrator | Tuesday 16 September 2025 00:57:12 +0000 (0:00:14.975) 0:00:45.802 ***** 2025-09-16 00:58:08.946152 | orchestrator | 2025-09-16 00:58:08.946163 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-16 00:58:08.946174 | orchestrator | Tuesday 16 September 2025 00:57:12 +0000 (0:00:00.065) 0:00:45.867 ***** 
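Note: the horizon container definitions echoed in the task output above carry a Docker healthcheck ('interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80']). As a rough illustration of those semantics only (this is not kolla's healthcheck_curl script, just a sketch; the URL is taken from the log), the probe behaves roughly like:

# Illustrative sketch of the recorded healthcheck semantics; the real probe is
# executed by Docker using kolla's healthcheck_curl inside the container.
import time
import urllib.request

def probe(url: str, interval: int = 30, retries: int = 3, timeout: int = 30) -> bool:
    """Return True if the endpoint answers with an HTTP status below 400."""
    for _ in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status < 400:
                    return True
        except OSError:
            pass  # connection refused, timeout, or HTTP error -> retry
        time.sleep(interval)
    return False

# Example, using the backend address shown above: probe("http://192.168.16.10:80")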
2025-09-16 00:58:08.946185 | orchestrator | 2025-09-16 00:58:08.946195 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-16 00:58:08.946206 | orchestrator | Tuesday 16 September 2025 00:57:12 +0000 (0:00:00.062) 0:00:45.930 ***** 2025-09-16 00:58:08.946217 | orchestrator | 2025-09-16 00:58:08.946228 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-09-16 00:58:08.946239 | orchestrator | Tuesday 16 September 2025 00:57:12 +0000 (0:00:00.068) 0:00:45.998 ***** 2025-09-16 00:58:08.946250 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:58:08.946261 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:58:08.946272 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:58:08.946283 | orchestrator | 2025-09-16 00:58:08.946294 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:58:08.946305 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-09-16 00:58:08.946316 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-16 00:58:08.946327 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-16 00:58:08.946338 | orchestrator | 2025-09-16 00:58:08.946349 | orchestrator | 2025-09-16 00:58:08.946360 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:58:08.946371 | orchestrator | Tuesday 16 September 2025 00:58:08 +0000 (0:00:56.108) 0:01:42.107 ***** 2025-09-16 00:58:08.946382 | orchestrator | =============================================================================== 2025-09-16 00:58:08.946393 | orchestrator | horizon : Restart horizon container ------------------------------------ 56.11s 2025-09-16 00:58:08.946403 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.98s 2025-09-16 00:58:08.946414 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.63s 2025-09-16 00:58:08.946425 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.28s 2025-09-16 00:58:08.946436 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.17s 2025-09-16 00:58:08.946447 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.12s 2025-09-16 00:58:08.946457 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.67s 2025-09-16 00:58:08.946468 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.63s 2025-09-16 00:58:08.946486 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.58s 2025-09-16 00:58:08.946497 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.57s 2025-09-16 00:58:08.946508 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.09s 2025-09-16 00:58:08.946518 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.81s 2025-09-16 00:58:08.946529 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.68s 2025-09-16 00:58:08.946540 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.60s 
2025-09-16 00:58:08.946550 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.56s 2025-09-16 00:58:08.946566 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.54s 2025-09-16 00:58:08.946576 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.51s 2025-09-16 00:58:08.946587 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.50s 2025-09-16 00:58:08.946598 | orchestrator | horizon : Update policy file name --------------------------------------- 0.49s 2025-09-16 00:58:08.946608 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.48s 2025-09-16 00:58:08.946619 | orchestrator | 2025-09-16 00:58:08 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:58:11.983112 | orchestrator | 2025-09-16 00:58:11 | INFO  | Task dddaf984-9cdd-4bff-94fd-84d367c2bf5d is in state STARTED 2025-09-16 00:58:11.985278 | orchestrator | 2025-09-16 00:58:11 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:58:11.985312 | orchestrator | 2025-09-16 00:58:11 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:58:15.026401 | orchestrator | 2025-09-16 00:58:15 | INFO  | Task dddaf984-9cdd-4bff-94fd-84d367c2bf5d is in state STARTED 2025-09-16 00:58:15.027291 | orchestrator | 2025-09-16 00:58:15 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:58:15.027710 | orchestrator | 2025-09-16 00:58:15 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:58:18.072529 | orchestrator | 2025-09-16 00:58:18 | INFO  | Task dddaf984-9cdd-4bff-94fd-84d367c2bf5d is in state STARTED 2025-09-16 00:58:18.074166 | orchestrator | 2025-09-16 00:58:18 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:58:18.074206 | orchestrator | 2025-09-16 00:58:18 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:58:21.124965 | orchestrator | 2025-09-16 00:58:21 | INFO  | Task dddaf984-9cdd-4bff-94fd-84d367c2bf5d is in state STARTED 2025-09-16 00:58:21.126305 | orchestrator | 2025-09-16 00:58:21 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:58:21.126922 | orchestrator | 2025-09-16 00:58:21 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:58:24.170612 | orchestrator | 2025-09-16 00:58:24 | INFO  | Task dddaf984-9cdd-4bff-94fd-84d367c2bf5d is in state STARTED 2025-09-16 00:58:24.172484 | orchestrator | 2025-09-16 00:58:24 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:58:24.172516 | orchestrator | 2025-09-16 00:58:24 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:58:27.211043 | orchestrator | 2025-09-16 00:58:27 | INFO  | Task dddaf984-9cdd-4bff-94fd-84d367c2bf5d is in state STARTED 2025-09-16 00:58:27.212632 | orchestrator | 2025-09-16 00:58:27 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:58:27.212671 | orchestrator | 2025-09-16 00:58:27 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:58:30.256778 | orchestrator | 2025-09-16 00:58:30 | INFO  | Task dddaf984-9cdd-4bff-94fd-84d367c2bf5d is in state STARTED 2025-09-16 00:58:30.257571 | orchestrator | 2025-09-16 00:58:30 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:58:30.257604 | orchestrator | 2025-09-16 00:58:30 | INFO  | Wait 1 second(s) until the next check 2025-09-16 
00:58:33.310637 | orchestrator | 2025-09-16 00:58:33 | INFO  | Task dddaf984-9cdd-4bff-94fd-84d367c2bf5d is in state STARTED 2025-09-16 00:58:33.314283 | orchestrator | 2025-09-16 00:58:33 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:58:33.314388 | orchestrator | 2025-09-16 00:58:33 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:58:36.358179 | orchestrator | 2025-09-16 00:58:36 | INFO  | Task dddaf984-9cdd-4bff-94fd-84d367c2bf5d is in state STARTED 2025-09-16 00:58:36.358615 | orchestrator | 2025-09-16 00:58:36 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:58:36.358645 | orchestrator | 2025-09-16 00:58:36 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:58:39.402409 | orchestrator | 2025-09-16 00:58:39 | INFO  | Task dddaf984-9cdd-4bff-94fd-84d367c2bf5d is in state STARTED 2025-09-16 00:58:39.404323 | orchestrator | 2025-09-16 00:58:39 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:58:39.404354 | orchestrator | 2025-09-16 00:58:39 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:58:42.438517 | orchestrator | 2025-09-16 00:58:42 | INFO  | Task dddaf984-9cdd-4bff-94fd-84d367c2bf5d is in state SUCCESS 2025-09-16 00:58:42.439461 | orchestrator | 2025-09-16 00:58:42 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:58:42.440988 | orchestrator | 2025-09-16 00:58:42 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:58:42.442606 | orchestrator | 2025-09-16 00:58:42 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:58:42.443708 | orchestrator | 2025-09-16 00:58:42 | INFO  | Task 7617119b-2178-40ec-abbf-c06842a92497 is in state STARTED 2025-09-16 00:58:42.443756 | orchestrator | 2025-09-16 00:58:42 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:58:45.492635 | orchestrator | 2025-09-16 00:58:45 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:58:45.492737 | orchestrator | 2025-09-16 00:58:45 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:58:45.492752 | orchestrator | 2025-09-16 00:58:45 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:58:45.492764 | orchestrator | 2025-09-16 00:58:45 | INFO  | Task 7617119b-2178-40ec-abbf-c06842a92497 is in state SUCCESS 2025-09-16 00:58:45.492776 | orchestrator | 2025-09-16 00:58:45 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:58:48.522938 | orchestrator | 2025-09-16 00:58:48 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 00:58:48.523046 | orchestrator | 2025-09-16 00:58:48 | INFO  | Task cd42ac3a-5271-48ce-be4b-d7f2d88bef54 is in state STARTED 2025-09-16 00:58:48.524295 | orchestrator | 2025-09-16 00:58:48 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:58:48.526683 | orchestrator | 2025-09-16 00:58:48 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:58:48.527531 | orchestrator | 2025-09-16 00:58:48 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:58:48.527567 | orchestrator | 2025-09-16 00:58:48 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:58:51.565199 | orchestrator | 2025-09-16 00:58:51 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 00:58:51.565308 | 
orchestrator | 2025-09-16 00:58:51 | INFO  | Task cd42ac3a-5271-48ce-be4b-d7f2d88bef54 is in state STARTED 2025-09-16 00:58:51.566289 | orchestrator | 2025-09-16 00:58:51 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:58:51.566810 | orchestrator | 2025-09-16 00:58:51 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:58:51.568247 | orchestrator | 2025-09-16 00:58:51 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:58:51.568305 | orchestrator | 2025-09-16 00:58:51 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:58:54.610293 | orchestrator | 2025-09-16 00:58:54 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 00:58:54.612618 | orchestrator | 2025-09-16 00:58:54 | INFO  | Task cd42ac3a-5271-48ce-be4b-d7f2d88bef54 is in state STARTED 2025-09-16 00:58:54.613798 | orchestrator | 2025-09-16 00:58:54 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:58:54.616450 | orchestrator | 2025-09-16 00:58:54 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:58:54.618378 | orchestrator | 2025-09-16 00:58:54 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:58:54.618909 | orchestrator | 2025-09-16 00:58:54 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:58:57.655807 | orchestrator | 2025-09-16 00:58:57 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 00:58:57.659817 | orchestrator | 2025-09-16 00:58:57 | INFO  | Task cd42ac3a-5271-48ce-be4b-d7f2d88bef54 is in state STARTED 2025-09-16 00:58:57.663027 | orchestrator | 2025-09-16 00:58:57 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:58:57.664635 | orchestrator | 2025-09-16 00:58:57 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:58:57.666434 | orchestrator | 2025-09-16 00:58:57 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:58:57.667337 | orchestrator | 2025-09-16 00:58:57 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:59:00.703423 | orchestrator | 2025-09-16 00:59:00 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 00:59:00.704255 | orchestrator | 2025-09-16 00:59:00 | INFO  | Task cd42ac3a-5271-48ce-be4b-d7f2d88bef54 is in state STARTED 2025-09-16 00:59:00.704319 | orchestrator | 2025-09-16 00:59:00 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:59:00.705024 | orchestrator | 2025-09-16 00:59:00 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:59:00.706764 | orchestrator | 2025-09-16 00:59:00 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:59:00.706794 | orchestrator | 2025-09-16 00:59:00 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:59:03.749471 | orchestrator | 2025-09-16 00:59:03 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 00:59:03.751675 | orchestrator | 2025-09-16 00:59:03 | INFO  | Task cd42ac3a-5271-48ce-be4b-d7f2d88bef54 is in state STARTED 2025-09-16 00:59:03.753396 | orchestrator | 2025-09-16 00:59:03 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:59:03.756606 | orchestrator | 2025-09-16 00:59:03 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 
2025-09-16 00:59:03.757708 | orchestrator | 2025-09-16 00:59:03 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:59:03.757768 | orchestrator | 2025-09-16 00:59:03 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:59:06.796242 | orchestrator | 2025-09-16 00:59:06 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 00:59:06.797014 | orchestrator | 2025-09-16 00:59:06 | INFO  | Task cd42ac3a-5271-48ce-be4b-d7f2d88bef54 is in state STARTED 2025-09-16 00:59:06.797921 | orchestrator | 2025-09-16 00:59:06 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:59:06.799424 | orchestrator | 2025-09-16 00:59:06 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:59:06.800933 | orchestrator | 2025-09-16 00:59:06 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:59:06.801101 | orchestrator | 2025-09-16 00:59:06 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:59:09.829732 | orchestrator | 2025-09-16 00:59:09 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 00:59:09.830136 | orchestrator | 2025-09-16 00:59:09 | INFO  | Task cd42ac3a-5271-48ce-be4b-d7f2d88bef54 is in state STARTED 2025-09-16 00:59:09.830684 | orchestrator | 2025-09-16 00:59:09 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:59:09.831337 | orchestrator | 2025-09-16 00:59:09 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state STARTED 2025-09-16 00:59:09.832391 | orchestrator | 2025-09-16 00:59:09 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:59:09.832428 | orchestrator | 2025-09-16 00:59:09 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:59:12.867547 | orchestrator | 2025-09-16 00:59:12 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 00:59:12.868160 | orchestrator | 2025-09-16 00:59:12 | INFO  | Task cd42ac3a-5271-48ce-be4b-d7f2d88bef54 is in state STARTED 2025-09-16 00:59:12.869834 | orchestrator | 2025-09-16 00:59:12 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:59:12.871771 | orchestrator | 2025-09-16 00:59:12 | INFO  | Task b65f19de-2f4f-4cad-a765-2d18247ab689 is in state SUCCESS 2025-09-16 00:59:12.871854 | orchestrator | 2025-09-16 00:59:12.871899 | orchestrator | 2025-09-16 00:59:12.871912 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-09-16 00:59:12.871924 | orchestrator | 2025-09-16 00:59:12.871936 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-09-16 00:59:12.872033 | orchestrator | Tuesday 16 September 2025 00:57:50 +0000 (0:00:00.174) 0:00:00.174 ***** 2025-09-16 00:59:12.872047 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-09-16 00:59:12.872060 | orchestrator | 2025-09-16 00:59:12.872071 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-09-16 00:59:12.872082 | orchestrator | Tuesday 16 September 2025 00:57:50 +0000 (0:00:00.168) 0:00:00.342 ***** 2025-09-16 00:59:12.872093 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-09-16 00:59:12.872104 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 
2025-09-16 00:59:12.872116 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-09-16 00:59:12.872127 | orchestrator | 2025-09-16 00:59:12.872139 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-09-16 00:59:12.872150 | orchestrator | Tuesday 16 September 2025 00:57:51 +0000 (0:00:01.012) 0:00:01.355 ***** 2025-09-16 00:59:12.872161 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-09-16 00:59:12.872194 | orchestrator | 2025-09-16 00:59:12.872220 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-09-16 00:59:12.872432 | orchestrator | Tuesday 16 September 2025 00:57:52 +0000 (0:00:00.973) 0:00:02.328 ***** 2025-09-16 00:59:12.872450 | orchestrator | changed: [testbed-manager] 2025-09-16 00:59:12.872463 | orchestrator | 2025-09-16 00:59:12.872475 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-09-16 00:59:12.872487 | orchestrator | Tuesday 16 September 2025 00:57:53 +0000 (0:00:00.850) 0:00:03.178 ***** 2025-09-16 00:59:12.872500 | orchestrator | changed: [testbed-manager] 2025-09-16 00:59:12.872512 | orchestrator | 2025-09-16 00:59:12.872524 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-09-16 00:59:12.872536 | orchestrator | Tuesday 16 September 2025 00:57:54 +0000 (0:00:00.757) 0:00:03.936 ***** 2025-09-16 00:59:12.872549 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-09-16 00:59:12.872561 | orchestrator | ok: [testbed-manager] 2025-09-16 00:59:12.872574 | orchestrator | 2025-09-16 00:59:12.872586 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-09-16 00:59:12.872599 | orchestrator | Tuesday 16 September 2025 00:58:30 +0000 (0:00:35.509) 0:00:39.446 ***** 2025-09-16 00:59:12.872612 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-09-16 00:59:12.872625 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-09-16 00:59:12.872637 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-09-16 00:59:12.872649 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-09-16 00:59:12.872663 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-09-16 00:59:12.872675 | orchestrator | 2025-09-16 00:59:12.872688 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-09-16 00:59:12.872700 | orchestrator | Tuesday 16 September 2025 00:58:34 +0000 (0:00:04.098) 0:00:43.544 ***** 2025-09-16 00:59:12.872712 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-09-16 00:59:12.872725 | orchestrator | 2025-09-16 00:59:12.872737 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-09-16 00:59:12.872750 | orchestrator | Tuesday 16 September 2025 00:58:34 +0000 (0:00:00.450) 0:00:43.995 ***** 2025-09-16 00:59:12.872761 | orchestrator | skipping: [testbed-manager] 2025-09-16 00:59:12.872771 | orchestrator | 2025-09-16 00:59:12.872782 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-09-16 00:59:12.872793 | orchestrator | Tuesday 16 September 2025 00:58:34 +0000 (0:00:00.124) 0:00:44.119 ***** 2025-09-16 00:59:12.872804 | orchestrator | skipping: [testbed-manager] 
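Note: the long runs of "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" lines earlier in this log come from the deployment tooling polling asynchronous task IDs until each one reports SUCCESS. A minimal sketch of that pattern (the get_task_state helper is a hypothetical stand-in, not the actual osism client API):

# Minimal polling sketch; get_task_state() is a hypothetical stand-in for
# however the real tooling queries task status (for example a Celery result backend).
import time

def wait_for_tasks(task_ids, get_task_state, delay: float = 1.0) -> None:
    """Block until every task reports SUCCESS, logging each check."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):  # sorted() copies, so discard below is safe
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(delay)} second(s) until the next check")
            time.sleep(delay)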
2025-09-16 00:59:12.872815 | orchestrator | 2025-09-16 00:59:12.872826 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-09-16 00:59:12.872837 | orchestrator | Tuesday 16 September 2025 00:58:35 +0000 (0:00:00.308) 0:00:44.428 ***** 2025-09-16 00:59:12.872848 | orchestrator | changed: [testbed-manager] 2025-09-16 00:59:12.873399 | orchestrator | 2025-09-16 00:59:12.873463 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-09-16 00:59:12.873475 | orchestrator | Tuesday 16 September 2025 00:58:36 +0000 (0:00:01.835) 0:00:46.264 ***** 2025-09-16 00:59:12.873486 | orchestrator | changed: [testbed-manager] 2025-09-16 00:59:12.873496 | orchestrator | 2025-09-16 00:59:12.873582 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-09-16 00:59:12.873593 | orchestrator | Tuesday 16 September 2025 00:58:37 +0000 (0:00:00.738) 0:00:47.002 ***** 2025-09-16 00:59:12.873604 | orchestrator | changed: [testbed-manager] 2025-09-16 00:59:12.873615 | orchestrator | 2025-09-16 00:59:12.873626 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-09-16 00:59:12.873637 | orchestrator | Tuesday 16 September 2025 00:58:38 +0000 (0:00:00.600) 0:00:47.602 ***** 2025-09-16 00:59:12.873648 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-09-16 00:59:12.873659 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-09-16 00:59:12.873683 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-09-16 00:59:12.873694 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-09-16 00:59:12.873705 | orchestrator | 2025-09-16 00:59:12.873716 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:59:12.873727 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-16 00:59:12.873739 | orchestrator | 2025-09-16 00:59:12.873750 | orchestrator | 2025-09-16 00:59:12.873797 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:59:12.873826 | orchestrator | Tuesday 16 September 2025 00:58:39 +0000 (0:00:01.446) 0:00:49.049 ***** 2025-09-16 00:59:12.873848 | orchestrator | =============================================================================== 2025-09-16 00:59:12.873859 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 35.51s 2025-09-16 00:59:12.873909 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.10s 2025-09-16 00:59:12.873920 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.84s 2025-09-16 00:59:12.873931 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.45s 2025-09-16 00:59:12.873942 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.01s 2025-09-16 00:59:12.873953 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 0.97s 2025-09-16 00:59:12.873964 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.85s 2025-09-16 00:59:12.873974 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.76s 2025-09-16 00:59:12.873986 | orchestrator | osism.services.cephclient : Ensure that all containers are up 
----------- 0.74s 2025-09-16 00:59:12.873997 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.60s 2025-09-16 00:59:12.874008 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.45s 2025-09-16 00:59:12.874079 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.31s 2025-09-16 00:59:12.874094 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.17s 2025-09-16 00:59:12.874105 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s 2025-09-16 00:59:12.874116 | orchestrator | 2025-09-16 00:59:12.874126 | orchestrator | 2025-09-16 00:59:12.874137 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-16 00:59:12.874148 | orchestrator | 2025-09-16 00:59:12.874159 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-16 00:59:12.874170 | orchestrator | Tuesday 16 September 2025 00:58:43 +0000 (0:00:00.173) 0:00:00.173 ***** 2025-09-16 00:59:12.874181 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:59:12.874192 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:59:12.874203 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:59:12.874215 | orchestrator | 2025-09-16 00:59:12.874228 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-16 00:59:12.874240 | orchestrator | Tuesday 16 September 2025 00:58:43 +0000 (0:00:00.259) 0:00:00.432 ***** 2025-09-16 00:59:12.874253 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-16 00:59:12.874265 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-16 00:59:12.874277 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-16 00:59:12.874289 | orchestrator | 2025-09-16 00:59:12.874302 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-09-16 00:59:12.874314 | orchestrator | 2025-09-16 00:59:12.874326 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-09-16 00:59:12.874338 | orchestrator | Tuesday 16 September 2025 00:58:44 +0000 (0:00:00.528) 0:00:00.961 ***** 2025-09-16 00:59:12.874351 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:59:12.874363 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:59:12.874376 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:59:12.874398 | orchestrator | 2025-09-16 00:59:12.874411 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:59:12.874424 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:59:12.874437 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:59:12.874450 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 00:59:12.874463 | orchestrator | 2025-09-16 00:59:12.874474 | orchestrator | 2025-09-16 00:59:12.874485 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:59:12.874495 | orchestrator | Tuesday 16 September 2025 00:58:45 +0000 (0:00:00.783) 0:00:01.744 ***** 2025-09-16 00:59:12.874506 | orchestrator | 
=============================================================================== 2025-09-16 00:59:12.874517 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.78s 2025-09-16 00:59:12.874528 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s 2025-09-16 00:59:12.874538 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s 2025-09-16 00:59:12.874549 | orchestrator | 2025-09-16 00:59:12.874560 | orchestrator | 2025-09-16 00:59:12.874571 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-16 00:59:12.874581 | orchestrator | 2025-09-16 00:59:12.874592 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-16 00:59:12.874603 | orchestrator | Tuesday 16 September 2025 00:56:26 +0000 (0:00:00.256) 0:00:00.256 ***** 2025-09-16 00:59:12.874614 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:59:12.874625 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:59:12.874635 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:59:12.874646 | orchestrator | 2025-09-16 00:59:12.874657 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-16 00:59:12.874668 | orchestrator | Tuesday 16 September 2025 00:56:26 +0000 (0:00:00.278) 0:00:00.535 ***** 2025-09-16 00:59:12.874679 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-16 00:59:12.874690 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-16 00:59:12.874701 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-16 00:59:12.874712 | orchestrator | 2025-09-16 00:59:12.874723 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-09-16 00:59:12.874734 | orchestrator | 2025-09-16 00:59:12.874784 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-16 00:59:12.874797 | orchestrator | Tuesday 16 September 2025 00:56:27 +0000 (0:00:00.424) 0:00:00.959 ***** 2025-09-16 00:59:12.874808 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:59:12.874819 | orchestrator | 2025-09-16 00:59:12.874831 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-09-16 00:59:12.874841 | orchestrator | Tuesday 16 September 2025 00:56:27 +0000 (0:00:00.502) 0:00:01.462 ***** 2025-09-16 00:59:12.874911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-16 00:59:12.874940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-16 00:59:12.874954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-16 00:59:12.874968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-16 00:59:12.875018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-16 00:59:12.875037 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-16 00:59:12.875057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-16 00:59:12.875069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-16 00:59:12.875080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-16 00:59:12.875091 | orchestrator | 2025-09-16 00:59:12.875102 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-09-16 00:59:12.875114 | orchestrator | Tuesday 16 September 2025 00:56:29 +0000 (0:00:01.728) 0:00:03.190 ***** 2025-09-16 00:59:12.875125 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-09-16 00:59:12.875136 | orchestrator | 2025-09-16 00:59:12.875146 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-09-16 00:59:12.875157 | orchestrator | Tuesday 16 September 2025 00:56:30 +0000 (0:00:00.806) 0:00:03.997 ***** 2025-09-16 00:59:12.875168 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:59:12.875179 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:59:12.875190 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:59:12.875201 | orchestrator | 2025-09-16 00:59:12.875212 | orchestrator | TASK [keystone : Check if Keystone 
domain-specific config is supplied] ********* 2025-09-16 00:59:12.875222 | orchestrator | Tuesday 16 September 2025 00:56:30 +0000 (0:00:00.464) 0:00:04.462 ***** 2025-09-16 00:59:12.875233 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-16 00:59:12.875244 | orchestrator | 2025-09-16 00:59:12.875255 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-16 00:59:12.875266 | orchestrator | Tuesday 16 September 2025 00:56:31 +0000 (0:00:00.740) 0:00:05.202 ***** 2025-09-16 00:59:12.875277 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:59:12.875288 | orchestrator | 2025-09-16 00:59:12.875304 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-09-16 00:59:12.875315 | orchestrator | Tuesday 16 September 2025 00:56:32 +0000 (0:00:00.599) 0:00:05.801 ***** 2025-09-16 00:59:12.875327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-16 00:59:12.875346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-16 00:59:12.875476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-16 00:59:12.875501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-16 00:59:12.875526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-16 00:59:12.875546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-16 00:59:12.875563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-16 00:59:12.875575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-16 00:59:12.875586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-16 00:59:12.875597 | orchestrator | 2025-09-16 00:59:12.875609 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-09-16 00:59:12.875620 | orchestrator | Tuesday 16 September 2025 00:56:35 +0000 (0:00:03.216) 0:00:09.018 ***** 2025-09-16 00:59:12.875632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-16 00:59:12.875651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-16 00:59:12.875669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-16 00:59:12.875681 | orchestrator | skipping: 
[testbed-node-0] 2025-09-16 00:59:12.875697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-16 00:59:12.875710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-16 00:59:12.875722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-16 00:59:12.875734 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:59:12.875753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-16 00:59:12.875772 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-16 00:59:12.875788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-16 00:59:12.875800 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:59:12.875811 | orchestrator | 2025-09-16 00:59:12.875822 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-09-16 00:59:12.875833 | orchestrator | Tuesday 16 September 2025 00:56:36 +0000 (0:00:00.770) 0:00:09.789 ***** 2025-09-16 00:59:12.875845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-16 00:59:12.875857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-16 00:59:12.875892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-16 00:59:12.875910 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:59:12.875931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-16 00:59:12.875948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-16 00:59:12.875960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-16 00:59:12.875971 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:59:12.875983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-16 00:59:12.875995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-16 00:59:12.876019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-16 00:59:12.876031 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:59:12.876042 | orchestrator | 2025-09-16 00:59:12.876053 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-09-16 00:59:12.876064 | orchestrator | Tuesday 16 September 2025 00:56:36 +0000 (0:00:00.739) 0:00:10.529 ***** 2025-09-16 00:59:12.876081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-16 00:59:12.876094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-16 00:59:12.876107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-16 00:59:12.876131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-16 00:59:12.876144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-16 00:59:12.876160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-16 00:59:12.876172 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-16 00:59:12.876183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-16 00:59:12.876195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-16 00:59:12.876212 | orchestrator | 2025-09-16 00:59:12.876224 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-09-16 00:59:12.876235 | orchestrator | Tuesday 16 September 2025 00:56:40 +0000 (0:00:03.259) 0:00:13.789 ***** 2025-09-16 00:59:12.876254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-16 00:59:12.876266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-16 00:59:12.876287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-16 00:59:12.876299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-16 00:59:12.876311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-16 00:59:12.876329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-16 00:59:12.876348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-16 00:59:12.876365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-16 00:59:12.876377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-16 00:59:12.876388 | orchestrator | 2025-09-16 00:59:12.876399 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-09-16 00:59:12.876410 | orchestrator | Tuesday 16 September 2025 00:56:45 +0000 (0:00:05.139) 0:00:18.928 ***** 2025-09-16 00:59:12.876421 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:59:12.876432 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:59:12.876443 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:59:12.876454 | orchestrator | 2025-09-16 00:59:12.876465 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-09-16 00:59:12.876475 | orchestrator | Tuesday 16 September 2025 00:56:46 +0000 (0:00:01.432) 0:00:20.361 ***** 2025-09-16 00:59:12.876486 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:59:12.876497 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:59:12.876514 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:59:12.876525 | orchestrator | 2025-09-16 00:59:12.876536 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-09-16 00:59:12.876547 | orchestrator | Tuesday 16 September 2025 00:56:47 +0000 (0:00:00.567) 0:00:20.928 ***** 2025-09-16 00:59:12.876558 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:59:12.876569 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:59:12.876579 | orchestrator | 
skipping: [testbed-node-2] 2025-09-16 00:59:12.876590 | orchestrator | 2025-09-16 00:59:12.876601 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-09-16 00:59:12.876611 | orchestrator | Tuesday 16 September 2025 00:56:47 +0000 (0:00:00.307) 0:00:21.236 ***** 2025-09-16 00:59:12.876622 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:59:12.876633 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:59:12.876644 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:59:12.876655 | orchestrator | 2025-09-16 00:59:12.876665 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-09-16 00:59:12.876676 | orchestrator | Tuesday 16 September 2025 00:56:48 +0000 (0:00:00.432) 0:00:21.669 ***** 2025-09-16 00:59:12.876688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-16 00:59:12.876706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-16 00:59:12.876724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 
2025-09-16 00:59:12.876736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-16 00:59:12.876755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-16 00:59:12.876767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-16 00:59:12.876787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-16 00:59:12.876799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-16 00:59:12.876815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-16 00:59:12.876833 | orchestrator | 2025-09-16 00:59:12.876844 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-16 00:59:12.876855 | orchestrator | Tuesday 16 September 2025 00:56:50 +0000 (0:00:02.197) 0:00:23.867 ***** 2025-09-16 00:59:12.876881 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:59:12.876892 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:59:12.876903 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:59:12.876914 | orchestrator | 2025-09-16 00:59:12.876925 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-09-16 00:59:12.876936 | orchestrator | Tuesday 16 September 2025 00:56:50 +0000 (0:00:00.320) 0:00:24.187 ***** 2025-09-16 00:59:12.876946 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-16 00:59:12.876958 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-16 00:59:12.876968 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-16 00:59:12.876979 | orchestrator | 2025-09-16 00:59:12.876990 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-09-16 00:59:12.877001 | orchestrator | Tuesday 16 September 2025 00:56:52 +0000 (0:00:01.501) 0:00:25.689 ***** 2025-09-16 00:59:12.877012 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-16 00:59:12.877023 | orchestrator | 2025-09-16 00:59:12.877033 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-09-16 00:59:12.877044 | orchestrator | Tuesday 16 September 2025 00:56:52 +0000 (0:00:00.874) 0:00:26.564 ***** 2025-09-16 00:59:12.877055 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:59:12.877065 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:59:12.877076 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:59:12.877087 | orchestrator | 2025-09-16 00:59:12.877097 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-09-16 00:59:12.877108 | orchestrator | Tuesday 16 September 2025 00:56:53 +0000 (0:00:00.746) 0:00:27.311 ***** 2025-09-16 00:59:12.877119 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-16 00:59:12.877130 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-16 00:59:12.877140 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-16 00:59:12.877151 | orchestrator | 2025-09-16 00:59:12.877162 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-09-16 00:59:12.877173 | orchestrator | Tuesday 16 September 2025 00:56:54 +0000 
(0:00:01.012) 0:00:28.323 ***** 2025-09-16 00:59:12.877184 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:59:12.877195 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:59:12.877205 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:59:12.877216 | orchestrator | 2025-09-16 00:59:12.877227 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-09-16 00:59:12.877238 | orchestrator | Tuesday 16 September 2025 00:56:55 +0000 (0:00:00.280) 0:00:28.604 ***** 2025-09-16 00:59:12.877249 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-16 00:59:12.877260 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-16 00:59:12.877270 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-16 00:59:12.877281 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-16 00:59:12.877292 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-16 00:59:12.877309 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-16 00:59:12.877321 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-16 00:59:12.877332 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-16 00:59:12.877343 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-16 00:59:12.877361 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-16 00:59:12.877372 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-16 00:59:12.877382 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-16 00:59:12.877393 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-16 00:59:12.877405 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-16 00:59:12.877416 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-16 00:59:12.877426 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-16 00:59:12.877442 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-16 00:59:12.877454 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-16 00:59:12.877465 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-16 00:59:12.877476 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-16 00:59:12.877487 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-16 00:59:12.877497 | orchestrator | 2025-09-16 00:59:12.877508 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-09-16 00:59:12.877519 | orchestrator | Tuesday 16 September 2025 
00:57:03 +0000 (0:00:08.650) 0:00:37.255 ***** 2025-09-16 00:59:12.877530 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-16 00:59:12.877541 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-16 00:59:12.877552 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-16 00:59:12.877562 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-16 00:59:12.877573 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-16 00:59:12.877584 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-16 00:59:12.877595 | orchestrator | 2025-09-16 00:59:12.877606 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-09-16 00:59:12.877617 | orchestrator | Tuesday 16 September 2025 00:57:06 +0000 (0:00:03.003) 0:00:40.258 ***** 2025-09-16 00:59:12.877628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-16 00:59:12.877649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-16 00:59:12.877673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-16 00:59:12.877686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-16 00:59:12.877698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-16 00:59:12.877709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-16 00:59:12.877721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-16 00:59:12.877746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-16 00:59:12.877757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-16 00:59:12.877769 | orchestrator | 2025-09-16 00:59:12.877780 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-16 00:59:12.877791 | orchestrator | Tuesday 16 September 2025 00:57:08 +0000 (0:00:02.286) 0:00:42.545 ***** 2025-09-16 00:59:12.877807 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:59:12.877818 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:59:12.877829 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:59:12.877839 | orchestrator | 2025-09-16 00:59:12.877850 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-09-16 00:59:12.877861 | orchestrator | Tuesday 16 September 2025 00:57:09 +0000 (0:00:00.282) 0:00:42.827 ***** 2025-09-16 00:59:12.877927 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:59:12.877938 | orchestrator | 2025-09-16 00:59:12.877949 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-09-16 00:59:12.877960 | orchestrator | Tuesday 16 September 2025 00:57:11 +0000 (0:00:02.301) 0:00:45.129 ***** 2025-09-16 00:59:12.877970 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:59:12.877981 | orchestrator | 2025-09-16 00:59:12.877992 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-09-16 00:59:12.878003 | orchestrator | Tuesday 16 September 2025 00:57:13 +0000 (0:00:02.138) 0:00:47.267 ***** 2025-09-16 00:59:12.878013 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:59:12.878076 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:59:12.878087 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:59:12.878098 | orchestrator | 2025-09-16 00:59:12.878109 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-09-16 00:59:12.878119 | orchestrator | Tuesday 16 September 2025 00:57:14 +0000 (0:00:00.893) 0:00:48.160 ***** 2025-09-16 00:59:12.878130 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:59:12.878141 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:59:12.878152 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:59:12.878163 | orchestrator | 2025-09-16 00:59:12.878173 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-09-16 00:59:12.878184 | orchestrator | Tuesday 16 September 2025 00:57:15 +0000 (0:00:00.498) 0:00:48.658 ***** 2025-09-16 00:59:12.878195 | orchestrator | skipping: 
[testbed-node-0] 2025-09-16 00:59:12.878206 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:59:12.878217 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:59:12.878228 | orchestrator | 2025-09-16 00:59:12.878247 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-09-16 00:59:12.878258 | orchestrator | Tuesday 16 September 2025 00:57:15 +0000 (0:00:00.346) 0:00:49.005 ***** 2025-09-16 00:59:12.878269 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:59:12.878279 | orchestrator | 2025-09-16 00:59:12.878290 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-09-16 00:59:12.878301 | orchestrator | Tuesday 16 September 2025 00:57:29 +0000 (0:00:13.797) 0:01:02.802 ***** 2025-09-16 00:59:12.878312 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:59:12.878322 | orchestrator | 2025-09-16 00:59:12.878333 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-16 00:59:12.878344 | orchestrator | Tuesday 16 September 2025 00:57:39 +0000 (0:00:10.038) 0:01:12.841 ***** 2025-09-16 00:59:12.878355 | orchestrator | 2025-09-16 00:59:12.878366 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-16 00:59:12.878377 | orchestrator | Tuesday 16 September 2025 00:57:39 +0000 (0:00:00.062) 0:01:12.903 ***** 2025-09-16 00:59:12.878387 | orchestrator | 2025-09-16 00:59:12.878398 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-16 00:59:12.878409 | orchestrator | Tuesday 16 September 2025 00:57:39 +0000 (0:00:00.064) 0:01:12.968 ***** 2025-09-16 00:59:12.878420 | orchestrator | 2025-09-16 00:59:12.878431 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-09-16 00:59:12.878442 | orchestrator | Tuesday 16 September 2025 00:57:39 +0000 (0:00:00.066) 0:01:13.035 ***** 2025-09-16 00:59:12.878451 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:59:12.878461 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:59:12.878471 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:59:12.878480 | orchestrator | 2025-09-16 00:59:12.878490 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-09-16 00:59:12.878499 | orchestrator | Tuesday 16 September 2025 00:58:04 +0000 (0:00:25.048) 0:01:38.083 ***** 2025-09-16 00:59:12.878509 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:59:12.878519 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:59:12.878528 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:59:12.878538 | orchestrator | 2025-09-16 00:59:12.878547 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-09-16 00:59:12.878557 | orchestrator | Tuesday 16 September 2025 00:58:14 +0000 (0:00:09.675) 0:01:47.758 ***** 2025-09-16 00:59:12.878567 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:59:12.878577 | orchestrator | changed: [testbed-node-1] 2025-09-16 00:59:12.878594 | orchestrator | changed: [testbed-node-2] 2025-09-16 00:59:12.878604 | orchestrator | 2025-09-16 00:59:12.878613 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-16 00:59:12.878623 | orchestrator | Tuesday 16 September 2025 00:58:21 +0000 (0:00:06.865) 0:01:54.624 ***** 2025-09-16 00:59:12.878632 | orchestrator | 
included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 00:59:12.878642 | orchestrator | 2025-09-16 00:59:12.878651 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-09-16 00:59:12.878661 | orchestrator | Tuesday 16 September 2025 00:58:21 +0000 (0:00:00.684) 0:01:55.308 ***** 2025-09-16 00:59:12.878670 | orchestrator | ok: [testbed-node-1] 2025-09-16 00:59:12.878680 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:59:12.878689 | orchestrator | ok: [testbed-node-2] 2025-09-16 00:59:12.878699 | orchestrator | 2025-09-16 00:59:12.878708 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-09-16 00:59:12.878718 | orchestrator | Tuesday 16 September 2025 00:58:22 +0000 (0:00:00.766) 0:01:56.074 ***** 2025-09-16 00:59:12.878727 | orchestrator | changed: [testbed-node-0] 2025-09-16 00:59:12.878737 | orchestrator | 2025-09-16 00:59:12.878746 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-09-16 00:59:12.878756 | orchestrator | Tuesday 16 September 2025 00:58:24 +0000 (0:00:01.767) 0:01:57.842 ***** 2025-09-16 00:59:12.878776 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-09-16 00:59:12.878786 | orchestrator | 2025-09-16 00:59:12.878795 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-09-16 00:59:12.878821 | orchestrator | Tuesday 16 September 2025 00:58:35 +0000 (0:00:11.084) 0:02:08.927 ***** 2025-09-16 00:59:12.878831 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-09-16 00:59:12.878841 | orchestrator | 2025-09-16 00:59:12.878850 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-09-16 00:59:12.878860 | orchestrator | Tuesday 16 September 2025 00:58:58 +0000 (0:00:23.289) 0:02:32.216 ***** 2025-09-16 00:59:12.878887 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-09-16 00:59:12.878897 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-09-16 00:59:12.878906 | orchestrator | 2025-09-16 00:59:12.878916 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-09-16 00:59:12.878926 | orchestrator | Tuesday 16 September 2025 00:59:05 +0000 (0:00:07.191) 0:02:39.408 ***** 2025-09-16 00:59:12.878935 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:59:12.878945 | orchestrator | 2025-09-16 00:59:12.878954 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-09-16 00:59:12.878964 | orchestrator | Tuesday 16 September 2025 00:59:05 +0000 (0:00:00.119) 0:02:39.528 ***** 2025-09-16 00:59:12.878974 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:59:12.878983 | orchestrator | 2025-09-16 00:59:12.878993 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-09-16 00:59:12.879002 | orchestrator | Tuesday 16 September 2025 00:59:06 +0000 (0:00:00.098) 0:02:39.627 ***** 2025-09-16 00:59:12.879012 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:59:12.879022 | orchestrator | 2025-09-16 00:59:12.879031 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-09-16 00:59:12.879041 | orchestrator | 
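[editor's note] The service-ks-register tasks above register the identity service and its internal/public endpoints in the catalog. As a rough illustration only, the following Python sketch performs the same registration by shelling out to the standard openstack CLI; the service name, region, and endpoint URLs are taken from the log, while the use of subprocess and of the openstack client itself are assumptions for illustration, not how kolla-ansible actually issues the calls.

# Hedged sketch: register the identity service and its endpoints by hand,
# mirroring the service-ks-register output above. Assumes admin credentials
# are already exported as OS_* environment variables and that
# python-openstackclient is installed.
import subprocess

def openstack(*args: str) -> None:
    # Run one openstack CLI command and fail loudly on error.
    subprocess.run(["openstack", *args], check=True)

openstack("service", "create", "--name", "keystone",
          "--description", "OpenStack Identity", "identity")
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:5000"),
    ("public", "https://api.testbed.osism.xyz:5000"),
]:
    openstack("endpoint", "create", "--region", "RegionOne",
              "identity", interface, url)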
Tuesday 16 September 2025 00:59:06 +0000 (0:00:00.109) 0:02:39.736 ***** 2025-09-16 00:59:12.879050 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:59:12.879060 | orchestrator | 2025-09-16 00:59:12.879069 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-09-16 00:59:12.879079 | orchestrator | Tuesday 16 September 2025 00:59:06 +0000 (0:00:00.384) 0:02:40.120 ***** 2025-09-16 00:59:12.879089 | orchestrator | ok: [testbed-node-0] 2025-09-16 00:59:12.879098 | orchestrator | 2025-09-16 00:59:12.879108 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-16 00:59:12.879117 | orchestrator | Tuesday 16 September 2025 00:59:09 +0000 (0:00:03.203) 0:02:43.324 ***** 2025-09-16 00:59:12.879127 | orchestrator | skipping: [testbed-node-0] 2025-09-16 00:59:12.879137 | orchestrator | skipping: [testbed-node-1] 2025-09-16 00:59:12.879146 | orchestrator | skipping: [testbed-node-2] 2025-09-16 00:59:12.879156 | orchestrator | 2025-09-16 00:59:12.879166 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 00:59:12.879176 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-09-16 00:59:12.879186 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-16 00:59:12.879196 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-16 00:59:12.879206 | orchestrator | 2025-09-16 00:59:12.879216 | orchestrator | 2025-09-16 00:59:12.879225 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 00:59:12.879235 | orchestrator | Tuesday 16 September 2025 00:59:10 +0000 (0:00:00.714) 0:02:44.038 ***** 2025-09-16 00:59:12.879245 | orchestrator | =============================================================================== 2025-09-16 00:59:12.879254 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 25.05s 2025-09-16 00:59:12.879271 | orchestrator | service-ks-register : keystone | Creating services --------------------- 23.29s 2025-09-16 00:59:12.879281 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.80s 2025-09-16 00:59:12.879290 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.08s 2025-09-16 00:59:12.879300 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.04s 2025-09-16 00:59:12.879315 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.68s 2025-09-16 00:59:12.879325 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.65s 2025-09-16 00:59:12.879334 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.19s 2025-09-16 00:59:12.879344 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.87s 2025-09-16 00:59:12.879353 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.14s 2025-09-16 00:59:12.879363 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.26s 2025-09-16 00:59:12.879372 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.22s 2025-09-16 00:59:12.879382 | orchestrator | 
keystone : Creating default user role ----------------------------------- 3.20s 2025-09-16 00:59:12.879391 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.00s 2025-09-16 00:59:12.879401 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.30s 2025-09-16 00:59:12.879410 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.29s 2025-09-16 00:59:12.879420 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.20s 2025-09-16 00:59:12.879429 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.14s 2025-09-16 00:59:12.879439 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.77s 2025-09-16 00:59:12.879453 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.73s 2025-09-16 00:59:12.879463 | orchestrator | 2025-09-16 00:59:12 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:59:12.879473 | orchestrator | 2025-09-16 00:59:12 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 00:59:12.879483 | orchestrator | 2025-09-16 00:59:12 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:59:15.902299 | orchestrator | 2025-09-16 00:59:15 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 00:59:15.902412 | orchestrator | 2025-09-16 00:59:15 | INFO  | Task cd42ac3a-5271-48ce-be4b-d7f2d88bef54 is in state STARTED 2025-09-16 00:59:15.902713 | orchestrator | 2025-09-16 00:59:15 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:59:15.903273 | orchestrator | 2025-09-16 00:59:15 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:59:15.903989 | orchestrator | 2025-09-16 00:59:15 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 00:59:15.904021 | orchestrator | 2025-09-16 00:59:15 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:59:18.944703 | orchestrator | 2025-09-16 00:59:18 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 00:59:18.947370 | orchestrator | 2025-09-16 00:59:18 | INFO  | Task cd42ac3a-5271-48ce-be4b-d7f2d88bef54 is in state STARTED 2025-09-16 00:59:18.949677 | orchestrator | 2025-09-16 00:59:18 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:59:18.951996 | orchestrator | 2025-09-16 00:59:18 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:59:18.953729 | orchestrator | 2025-09-16 00:59:18 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 00:59:18.954136 | orchestrator | 2025-09-16 00:59:18 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:59:21.985749 | orchestrator | 2025-09-16 00:59:21 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 00:59:21.986632 | orchestrator | 2025-09-16 00:59:21 | INFO  | Task cd42ac3a-5271-48ce-be4b-d7f2d88bef54 is in state STARTED 2025-09-16 00:59:21.988420 | orchestrator | 2025-09-16 00:59:21 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:59:21.989952 | orchestrator | 2025-09-16 00:59:21 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:59:21.991296 | orchestrator | 2025-09-16 00:59:21 | INFO  | Task 
8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 00:59:21.991385 | orchestrator | 2025-09-16 00:59:21 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:59:25.021072 | orchestrator | 2025-09-16 00:59:25 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 00:59:25.022507 | orchestrator | 2025-09-16 00:59:25 | INFO  | Task cd42ac3a-5271-48ce-be4b-d7f2d88bef54 is in state STARTED 2025-09-16 00:59:25.023903 | orchestrator | 2025-09-16 00:59:25 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:59:25.027022 | orchestrator | 2025-09-16 00:59:25 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:59:25.029387 | orchestrator | 2025-09-16 00:59:25 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 00:59:25.029436 | orchestrator | 2025-09-16 00:59:25 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:59:28.056260 | orchestrator | 2025-09-16 00:59:28 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 00:59:28.056665 | orchestrator | 2025-09-16 00:59:28 | INFO  | Task cd42ac3a-5271-48ce-be4b-d7f2d88bef54 is in state STARTED 2025-09-16 00:59:28.057947 | orchestrator | 2025-09-16 00:59:28 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:59:28.059720 | orchestrator | 2025-09-16 00:59:28 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:59:28.060214 | orchestrator | 2025-09-16 00:59:28 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 00:59:28.060458 | orchestrator | 2025-09-16 00:59:28 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:59:31.099357 | orchestrator | 2025-09-16 00:59:31 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 00:59:31.099777 | orchestrator | 2025-09-16 00:59:31 | INFO  | Task cd42ac3a-5271-48ce-be4b-d7f2d88bef54 is in state STARTED 2025-09-16 00:59:31.100477 | orchestrator | 2025-09-16 00:59:31 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:59:31.102399 | orchestrator | 2025-09-16 00:59:31 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:59:31.102494 | orchestrator | 2025-09-16 00:59:31 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 00:59:31.102508 | orchestrator | 2025-09-16 00:59:31 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:59:34.123282 | orchestrator | 2025-09-16 00:59:34 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 00:59:34.123436 | orchestrator | 2025-09-16 00:59:34 | INFO  | Task cd42ac3a-5271-48ce-be4b-d7f2d88bef54 is in state SUCCESS 2025-09-16 00:59:34.123550 | orchestrator | 2025-09-16 00:59:34 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:59:34.124142 | orchestrator | 2025-09-16 00:59:34 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:59:34.124567 | orchestrator | 2025-09-16 00:59:34 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 00:59:34.124590 | orchestrator | 2025-09-16 00:59:34 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:59:37.155676 | orchestrator | 2025-09-16 00:59:37 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 00:59:37.156283 | orchestrator | 2025-09-16 00:59:37 | INFO  | Task 
e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 00:59:37.157465 | orchestrator | 2025-09-16 00:59:37 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:59:37.158452 | orchestrator | 2025-09-16 00:59:37 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:59:37.159387 | orchestrator | 2025-09-16 00:59:37 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 00:59:37.159876 | orchestrator | 2025-09-16 00:59:37 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:59:40.198416 | orchestrator | 2025-09-16 00:59:40 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 00:59:40.198751 | orchestrator | 2025-09-16 00:59:40 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 00:59:40.199526 | orchestrator | 2025-09-16 00:59:40 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:59:40.203004 | orchestrator | 2025-09-16 00:59:40 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:59:40.203457 | orchestrator | 2025-09-16 00:59:40 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 00:59:40.203475 | orchestrator | 2025-09-16 00:59:40 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:59:43.231279 | orchestrator | 2025-09-16 00:59:43 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 00:59:43.232494 | orchestrator | 2025-09-16 00:59:43 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 00:59:43.234146 | orchestrator | 2025-09-16 00:59:43 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:59:43.235814 | orchestrator | 2025-09-16 00:59:43 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:59:43.237119 | orchestrator | 2025-09-16 00:59:43 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 00:59:43.237379 | orchestrator | 2025-09-16 00:59:43 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:59:46.266381 | orchestrator | 2025-09-16 00:59:46 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 00:59:46.266493 | orchestrator | 2025-09-16 00:59:46 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 00:59:46.266982 | orchestrator | 2025-09-16 00:59:46 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:59:46.267556 | orchestrator | 2025-09-16 00:59:46 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:59:46.268467 | orchestrator | 2025-09-16 00:59:46 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 00:59:46.268487 | orchestrator | 2025-09-16 00:59:46 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:59:49.297790 | orchestrator | 2025-09-16 00:59:49 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 00:59:49.298280 | orchestrator | 2025-09-16 00:59:49 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 00:59:49.299842 | orchestrator | 2025-09-16 00:59:49 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:59:49.300577 | orchestrator | 2025-09-16 00:59:49 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:59:49.301305 | orchestrator | 2025-09-16 
00:59:49 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 00:59:49.301494 | orchestrator | 2025-09-16 00:59:49 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:59:52.381758 | orchestrator | 2025-09-16 00:59:52 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 00:59:52.381855 | orchestrator | 2025-09-16 00:59:52 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 00:59:52.381872 | orchestrator | 2025-09-16 00:59:52 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:59:52.381884 | orchestrator | 2025-09-16 00:59:52 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:59:52.381895 | orchestrator | 2025-09-16 00:59:52 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 00:59:52.381906 | orchestrator | 2025-09-16 00:59:52 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:59:55.355508 | orchestrator | 2025-09-16 00:59:55 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 00:59:55.355608 | orchestrator | 2025-09-16 00:59:55 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 00:59:55.356400 | orchestrator | 2025-09-16 00:59:55 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:59:55.356923 | orchestrator | 2025-09-16 00:59:55 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:59:55.357565 | orchestrator | 2025-09-16 00:59:55 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 00:59:55.357598 | orchestrator | 2025-09-16 00:59:55 | INFO  | Wait 1 second(s) until the next check 2025-09-16 00:59:58.395660 | orchestrator | 2025-09-16 00:59:58 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 00:59:58.396466 | orchestrator | 2025-09-16 00:59:58 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 00:59:58.396522 | orchestrator | 2025-09-16 00:59:58 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 00:59:58.396991 | orchestrator | 2025-09-16 00:59:58 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 00:59:58.397528 | orchestrator | 2025-09-16 00:59:58 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 00:59:58.397550 | orchestrator | 2025-09-16 00:59:58 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:00:01.437791 | orchestrator | 2025-09-16 01:00:01 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:00:01.437891 | orchestrator | 2025-09-16 01:00:01 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:00:01.437906 | orchestrator | 2025-09-16 01:00:01 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:00:01.437918 | orchestrator | 2025-09-16 01:00:01 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 01:00:01.437929 | orchestrator | 2025-09-16 01:00:01 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:00:01.437940 | orchestrator | 2025-09-16 01:00:01 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:00:04.451915 | orchestrator | 2025-09-16 01:00:04 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:00:04.452195 | orchestrator | 2025-09-16 
01:00:04 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:00:04.452938 | orchestrator | 2025-09-16 01:00:04 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:00:04.454617 | orchestrator | 2025-09-16 01:00:04 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 01:00:04.455139 | orchestrator | 2025-09-16 01:00:04 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:00:04.455171 | orchestrator | 2025-09-16 01:00:04 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:00:07.484180 | orchestrator | 2025-09-16 01:00:07 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:00:07.487181 | orchestrator | 2025-09-16 01:00:07 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:00:07.487692 | orchestrator | 2025-09-16 01:00:07 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:00:07.488402 | orchestrator | 2025-09-16 01:00:07 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 01:00:07.489153 | orchestrator | 2025-09-16 01:00:07 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:00:07.489176 | orchestrator | 2025-09-16 01:00:07 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:00:10.512630 | orchestrator | 2025-09-16 01:00:10 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:00:10.514085 | orchestrator | 2025-09-16 01:00:10 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:00:10.514672 | orchestrator | 2025-09-16 01:00:10 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:00:10.515407 | orchestrator | 2025-09-16 01:00:10 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state STARTED 2025-09-16 01:00:10.516835 | orchestrator | 2025-09-16 01:00:10 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:00:10.516856 | orchestrator | 2025-09-16 01:00:10 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:00:13.548404 | orchestrator | 2025-09-16 01:00:13 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:00:13.551137 | orchestrator | 2025-09-16 01:00:13 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:00:13.552682 | orchestrator | 2025-09-16 01:00:13 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:00:13.553570 | orchestrator | 2025-09-16 01:00:13 | INFO  | Task 9f3f14ba-343e-45a8-b92f-5956ef6794ae is in state SUCCESS 2025-09-16 01:00:13.554450 | orchestrator | 2025-09-16 01:00:13.554478 | orchestrator | 2025-09-16 01:00:13.554492 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-16 01:00:13.554506 | orchestrator | 2025-09-16 01:00:13.554519 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-16 01:00:13.554532 | orchestrator | Tuesday 16 September 2025 00:58:50 +0000 (0:00:00.251) 0:00:00.251 ***** 2025-09-16 01:00:13.554546 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:00:13.554559 | orchestrator | ok: [testbed-node-1] 2025-09-16 01:00:13.554570 | orchestrator | ok: [testbed-node-2] 2025-09-16 01:00:13.554581 | orchestrator | ok: [testbed-manager] 2025-09-16 01:00:13.554591 | orchestrator | ok: [testbed-node-3] 
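[editor's note] The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" messages above come from a simple state-polling loop in the deployment tooling. A minimal sketch of such a loop is shown below; get_task_state() is a hypothetical stand-in for whatever client call is actually used, assumed to return strings such as "STARTED" or "SUCCESS".

# Minimal sketch of the wait loop behind the "Task ... is in state STARTED"
# messages above (assumptions noted in the comments).
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orchestrator")

def wait_for_tasks(task_ids, get_task_state, interval=1):
    """Poll task states until every task has finished."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # hypothetical helper
            log.info("Task %s is in state %s", task_id, state)
            if state == "SUCCESS":
                pending.discard(task_id)
            # A real loop would also handle failure states here.
        if pending:
            log.info("Wait %d second(s) until the next check", interval)
            time.sleep(interval)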
2025-09-16 01:00:13.554602 | orchestrator | ok: [testbed-node-4] 2025-09-16 01:00:13.554636 | orchestrator | ok: [testbed-node-5] 2025-09-16 01:00:13.554647 | orchestrator | 2025-09-16 01:00:13.554658 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-16 01:00:13.554669 | orchestrator | Tuesday 16 September 2025 00:58:51 +0000 (0:00:00.796) 0:00:01.047 ***** 2025-09-16 01:00:13.554680 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-09-16 01:00:13.554691 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-09-16 01:00:13.554702 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-09-16 01:00:13.554713 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-09-16 01:00:13.554724 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-09-16 01:00:13.554736 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-09-16 01:00:13.554747 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-09-16 01:00:13.554758 | orchestrator | 2025-09-16 01:00:13.554769 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-16 01:00:13.554780 | orchestrator | 2025-09-16 01:00:13.554791 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-09-16 01:00:13.554802 | orchestrator | Tuesday 16 September 2025 00:58:51 +0000 (0:00:00.776) 0:00:01.824 ***** 2025-09-16 01:00:13.554814 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 01:00:13.554826 | orchestrator | 2025-09-16 01:00:13.554837 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-09-16 01:00:13.554848 | orchestrator | Tuesday 16 September 2025 00:58:53 +0000 (0:00:01.916) 0:00:03.740 ***** 2025-09-16 01:00:13.554859 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-09-16 01:00:13.554870 | orchestrator | 2025-09-16 01:00:13.554881 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-09-16 01:00:13.554892 | orchestrator | Tuesday 16 September 2025 00:59:05 +0000 (0:00:11.751) 0:00:15.492 ***** 2025-09-16 01:00:13.554903 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-09-16 01:00:13.554915 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-09-16 01:00:13.554926 | orchestrator | 2025-09-16 01:00:13.554937 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-09-16 01:00:13.554960 | orchestrator | Tuesday 16 September 2025 00:59:11 +0000 (0:00:06.437) 0:00:21.930 ***** 2025-09-16 01:00:13.554971 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-16 01:00:13.554983 | orchestrator | 2025-09-16 01:00:13.554994 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-09-16 01:00:13.555025 | orchestrator | Tuesday 16 September 2025 00:59:15 +0000 (0:00:04.079) 0:00:26.009 ***** 2025-09-16 01:00:13.555037 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-16 01:00:13.555048 | 
orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-09-16 01:00:13.555059 | orchestrator | 2025-09-16 01:00:13.555070 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-09-16 01:00:13.555080 | orchestrator | Tuesday 16 September 2025 00:59:20 +0000 (0:00:04.022) 0:00:30.032 ***** 2025-09-16 01:00:13.555091 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-16 01:00:13.555102 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-09-16 01:00:13.555113 | orchestrator | 2025-09-16 01:00:13.555124 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-09-16 01:00:13.555135 | orchestrator | Tuesday 16 September 2025 00:59:27 +0000 (0:00:07.371) 0:00:37.405 ***** 2025-09-16 01:00:13.555146 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-09-16 01:00:13.555157 | orchestrator | 2025-09-16 01:00:13.555167 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 01:00:13.555186 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 01:00:13.555198 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 01:00:13.555209 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 01:00:13.555220 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 01:00:13.555231 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 01:00:13.555253 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 01:00:13.555265 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 01:00:13.555275 | orchestrator | 2025-09-16 01:00:13.555286 | orchestrator | 2025-09-16 01:00:13.555297 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 01:00:13.555308 | orchestrator | Tuesday 16 September 2025 00:59:32 +0000 (0:00:05.375) 0:00:42.781 ***** 2025-09-16 01:00:13.555319 | orchestrator | =============================================================================== 2025-09-16 01:00:13.555330 | orchestrator | service-ks-register : ceph-rgw | Creating services --------------------- 11.75s 2025-09-16 01:00:13.555340 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.37s 2025-09-16 01:00:13.555351 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.44s 2025-09-16 01:00:13.555362 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.38s 2025-09-16 01:00:13.555373 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 4.08s 2025-09-16 01:00:13.555383 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.02s 2025-09-16 01:00:13.555394 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.92s 2025-09-16 01:00:13.555405 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.80s 2025-09-16 01:00:13.555416 | orchestrator | Group hosts based on enabled services 
----------------------------------- 0.78s 2025-09-16 01:00:13.555426 | orchestrator | 2025-09-16 01:00:13.555437 | orchestrator | 2025-09-16 01:00:13.555448 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-09-16 01:00:13.555458 | orchestrator | 2025-09-16 01:00:13.555469 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-09-16 01:00:13.555480 | orchestrator | Tuesday 16 September 2025 00:58:43 +0000 (0:00:00.244) 0:00:00.244 ***** 2025-09-16 01:00:13.555490 | orchestrator | changed: [testbed-manager] 2025-09-16 01:00:13.555501 | orchestrator | 2025-09-16 01:00:13.555512 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-09-16 01:00:13.555522 | orchestrator | Tuesday 16 September 2025 00:58:44 +0000 (0:00:01.112) 0:00:01.357 ***** 2025-09-16 01:00:13.555533 | orchestrator | changed: [testbed-manager] 2025-09-16 01:00:13.555544 | orchestrator | 2025-09-16 01:00:13.555554 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-09-16 01:00:13.555565 | orchestrator | Tuesday 16 September 2025 00:58:45 +0000 (0:00:00.914) 0:00:02.272 ***** 2025-09-16 01:00:13.555576 | orchestrator | changed: [testbed-manager] 2025-09-16 01:00:13.555586 | orchestrator | 2025-09-16 01:00:13.555597 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-09-16 01:00:13.555608 | orchestrator | Tuesday 16 September 2025 00:58:46 +0000 (0:00:00.952) 0:00:03.224 ***** 2025-09-16 01:00:13.555619 | orchestrator | changed: [testbed-manager] 2025-09-16 01:00:13.555629 | orchestrator | 2025-09-16 01:00:13.555646 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-09-16 01:00:13.555657 | orchestrator | Tuesday 16 September 2025 00:58:47 +0000 (0:00:01.056) 0:00:04.280 ***** 2025-09-16 01:00:13.555668 | orchestrator | changed: [testbed-manager] 2025-09-16 01:00:13.555678 | orchestrator | 2025-09-16 01:00:13.555694 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-09-16 01:00:13.555705 | orchestrator | Tuesday 16 September 2025 00:58:48 +0000 (0:00:01.069) 0:00:05.350 ***** 2025-09-16 01:00:13.555716 | orchestrator | changed: [testbed-manager] 2025-09-16 01:00:13.555727 | orchestrator | 2025-09-16 01:00:13.555738 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-09-16 01:00:13.555748 | orchestrator | Tuesday 16 September 2025 00:58:49 +0000 (0:00:01.038) 0:00:06.389 ***** 2025-09-16 01:00:13.555759 | orchestrator | changed: [testbed-manager] 2025-09-16 01:00:13.555770 | orchestrator | 2025-09-16 01:00:13.555781 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-09-16 01:00:13.555791 | orchestrator | Tuesday 16 September 2025 00:58:50 +0000 (0:00:01.068) 0:00:07.457 ***** 2025-09-16 01:00:13.555802 | orchestrator | changed: [testbed-manager] 2025-09-16 01:00:13.555813 | orchestrator | 2025-09-16 01:00:13.555823 | orchestrator | TASK [Create admin user] ******************************************************* 2025-09-16 01:00:13.555834 | orchestrator | Tuesday 16 September 2025 00:58:51 +0000 (0:00:00.976) 0:00:08.434 ***** 2025-09-16 01:00:13.555845 | orchestrator | changed: [testbed-manager] 2025-09-16 01:00:13.555855 | orchestrator | 2025-09-16 01:00:13.555866 | orchestrator 
| TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-09-16 01:00:13.555877 | orchestrator | Tuesday 16 September 2025 00:59:46 +0000 (0:00:54.777) 0:01:03.211 ***** 2025-09-16 01:00:13.555887 | orchestrator | skipping: [testbed-manager] 2025-09-16 01:00:13.555898 | orchestrator | 2025-09-16 01:00:13.555909 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-16 01:00:13.555919 | orchestrator | 2025-09-16 01:00:13.555930 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-16 01:00:13.555941 | orchestrator | Tuesday 16 September 2025 00:59:46 +0000 (0:00:00.151) 0:01:03.363 ***** 2025-09-16 01:00:13.555951 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:00:13.555962 | orchestrator | 2025-09-16 01:00:13.555973 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-16 01:00:13.555983 | orchestrator | 2025-09-16 01:00:13.555994 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-16 01:00:13.556035 | orchestrator | Tuesday 16 September 2025 00:59:58 +0000 (0:00:11.804) 0:01:15.167 ***** 2025-09-16 01:00:13.556048 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:00:13.556059 | orchestrator | 2025-09-16 01:00:13.556069 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-16 01:00:13.556080 | orchestrator | 2025-09-16 01:00:13.556091 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-16 01:00:13.556101 | orchestrator | Tuesday 16 September 2025 01:00:09 +0000 (0:00:11.292) 0:01:26.459 ***** 2025-09-16 01:00:13.556112 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:00:13.556123 | orchestrator | 2025-09-16 01:00:13.556140 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 01:00:13.556152 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-16 01:00:13.556163 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 01:00:13.556265 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 01:00:13.556278 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 01:00:13.556298 | orchestrator | 2025-09-16 01:00:13.556309 | orchestrator | 2025-09-16 01:00:13.556320 | orchestrator | 2025-09-16 01:00:13.556330 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 01:00:13.556341 | orchestrator | Tuesday 16 September 2025 01:00:11 +0000 (0:00:01.216) 0:01:27.676 ***** 2025-09-16 01:00:13.556430 | orchestrator | =============================================================================== 2025-09-16 01:00:13.556443 | orchestrator | Create admin user ------------------------------------------------------ 54.78s 2025-09-16 01:00:13.556453 | orchestrator | Restart ceph manager service ------------------------------------------- 24.31s 2025-09-16 01:00:13.556464 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.11s 2025-09-16 01:00:13.556475 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.07s 2025-09-16 
01:00:13.556486 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.07s 2025-09-16 01:00:13.556496 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.06s 2025-09-16 01:00:13.556507 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.04s 2025-09-16 01:00:13.556518 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.98s 2025-09-16 01:00:13.556529 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.95s 2025-09-16 01:00:13.556539 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.91s 2025-09-16 01:00:13.556550 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.15s 2025-09-16 01:00:13.556567 | orchestrator | 2025-09-16 01:00:13 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:00:13.556578 | orchestrator | 2025-09-16 01:00:13 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:00:16.576710 | orchestrator | 2025-09-16 01:00:16 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:00:16.576980 | orchestrator | 2025-09-16 01:00:16 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:00:16.577832 | orchestrator | 2025-09-16 01:00:16 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:00:16.578610 | orchestrator | 2025-09-16 01:00:16 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:00:16.578653 | orchestrator | 2025-09-16 01:00:16 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:00:19.627465 | orchestrator | 2025-09-16 01:00:19 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:00:19.628090 | orchestrator | 2025-09-16 01:00:19 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:00:19.629566 | orchestrator | 2025-09-16 01:00:19 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:00:19.630940 | orchestrator | 2025-09-16 01:00:19 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:00:19.630964 | orchestrator | 2025-09-16 01:00:19 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:00:22.663947 | orchestrator | 2025-09-16 01:00:22 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:00:22.664318 | orchestrator | 2025-09-16 01:00:22 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:00:22.666457 | orchestrator | 2025-09-16 01:00:22 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:00:22.669203 | orchestrator | 2025-09-16 01:00:22 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:00:22.669227 | orchestrator | 2025-09-16 01:00:22 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:00:25.720443 | orchestrator | 2025-09-16 01:00:25 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:00:25.720689 | orchestrator | 2025-09-16 01:00:25 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:00:25.721394 | orchestrator | 2025-09-16 01:00:25 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:00:25.722009 | orchestrator | 2025-09-16 01:00:25 | INFO  | 
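[editor's note] The ceph dashboard bootstrap play summarized above disables the dashboard module, sets a handful of mgr/dashboard/* options, re-enables the module, and creates an admin user from a temporary password file. Below is a rough equivalent using the ceph CLI from Python; the option names and values are copied from the task names in the log, while the username and password-file path are placeholders and the ac-user-create syntax may differ between Ceph releases.

# Hedged sketch of the dashboard bootstrap steps shown above.
import subprocess

def ceph(*args: str) -> None:
    # Run one ceph CLI command and fail loudly on error.
    subprocess.run(["ceph", *args], check=True)

ceph("mgr", "module", "disable", "dashboard")
for option, value in [
    ("mgr/dashboard/ssl", "false"),
    ("mgr/dashboard/server_port", "7000"),
    ("mgr/dashboard/server_addr", "0.0.0.0"),
    ("mgr/dashboard/standby_behaviour", "error"),
    ("mgr/dashboard/standby_error_status_code", "404"),
]:
    ceph("config", "set", "mgr", option, value)
ceph("mgr", "module", "enable", "dashboard")
# Password read from a file, matching the temporary-file step in the log;
# both the username and the path are illustrative placeholders.
ceph("dashboard", "ac-user-create", "admin",
     "-i", "/tmp/ceph_dashboard_password", "administrator")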
Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:00:25.722200 | orchestrator | 2025-09-16 01:00:25 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:00:28.742645 | orchestrator | 2025-09-16 01:00:28 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:00:28.742948 | orchestrator | 2025-09-16 01:00:28 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:00:28.743657 | orchestrator | 2025-09-16 01:00:28 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:00:28.744548 | orchestrator | 2025-09-16 01:00:28 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:00:28.744576 | orchestrator | 2025-09-16 01:00:28 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:00:31.786135 | orchestrator | 2025-09-16 01:00:31 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:00:31.787470 | orchestrator | 2025-09-16 01:00:31 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:00:31.790210 | orchestrator | 2025-09-16 01:00:31 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:00:31.792950 | orchestrator | 2025-09-16 01:00:31 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:00:31.793044 | orchestrator | 2025-09-16 01:00:31 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:00:34.884969 | orchestrator | 2025-09-16 01:00:34 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:00:34.885120 | orchestrator | 2025-09-16 01:00:34 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:00:34.885300 | orchestrator | 2025-09-16 01:00:34 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:00:34.885798 | orchestrator | 2025-09-16 01:00:34 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:00:34.885824 | orchestrator | 2025-09-16 01:00:34 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:00:37.918991 | orchestrator | 2025-09-16 01:00:37 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:00:37.919149 | orchestrator | 2025-09-16 01:00:37 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:00:37.919186 | orchestrator | 2025-09-16 01:00:37 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:00:37.919199 | orchestrator | 2025-09-16 01:00:37 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:00:37.919210 | orchestrator | 2025-09-16 01:00:37 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:00:40.954464 | orchestrator | 2025-09-16 01:00:40 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:00:40.954817 | orchestrator | 2025-09-16 01:00:40 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:00:40.957574 | orchestrator | 2025-09-16 01:00:40 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:00:40.960101 | orchestrator | 2025-09-16 01:00:40 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:00:40.960157 | orchestrator | 2025-09-16 01:00:40 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:00:44.008004 | orchestrator | 2025-09-16 01:00:44 | INFO  | Task 
f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:00:44.009501 | orchestrator | 2025-09-16 01:00:44 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:00:44.011186 | orchestrator | 2025-09-16 01:00:44 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:00:44.012989 | orchestrator | 2025-09-16 01:00:44 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:00:44.013037 | orchestrator | 2025-09-16 01:00:44 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:00:47.051172 | orchestrator | 2025-09-16 01:00:47 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:00:47.051284 | orchestrator | 2025-09-16 01:00:47 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:00:47.052351 | orchestrator | 2025-09-16 01:00:47 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:00:47.053746 | orchestrator | 2025-09-16 01:00:47 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:00:47.053957 | orchestrator | 2025-09-16 01:00:47 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:00:50.096200 | orchestrator | 2025-09-16 01:00:50 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:00:50.097746 | orchestrator | 2025-09-16 01:00:50 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:00:50.099367 | orchestrator | 2025-09-16 01:00:50 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:00:50.100612 | orchestrator | 2025-09-16 01:00:50 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:00:50.100758 | orchestrator | 2025-09-16 01:00:50 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:00:53.132614 | orchestrator | 2025-09-16 01:00:53 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:00:53.132710 | orchestrator | 2025-09-16 01:00:53 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:00:53.133379 | orchestrator | 2025-09-16 01:00:53 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:00:53.134246 | orchestrator | 2025-09-16 01:00:53 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:00:53.134269 | orchestrator | 2025-09-16 01:00:53 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:00:56.168705 | orchestrator | 2025-09-16 01:00:56 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:00:56.169566 | orchestrator | 2025-09-16 01:00:56 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:00:56.171590 | orchestrator | 2025-09-16 01:00:56 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:00:56.172794 | orchestrator | 2025-09-16 01:00:56 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:00:56.172825 | orchestrator | 2025-09-16 01:00:56 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:00:59.212118 | orchestrator | 2025-09-16 01:00:59 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:00:59.212215 | orchestrator | 2025-09-16 01:00:59 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:00:59.213997 | orchestrator | 2025-09-16 01:00:59 | INFO  | Task 
be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:00:59.214137 | orchestrator | 2025-09-16 01:00:59 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:00:59.214426 | orchestrator | 2025-09-16 01:00:59 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:01:02.253543 | orchestrator | 2025-09-16 01:01:02 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:01:02.254507 | orchestrator | 2025-09-16 01:01:02 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:01:02.255912 | orchestrator | 2025-09-16 01:01:02 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:01:02.257280 | orchestrator | 2025-09-16 01:01:02 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:01:02.257301 | orchestrator | 2025-09-16 01:01:02 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:01:05.302271 | orchestrator | 2025-09-16 01:01:05 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:01:05.306737 | orchestrator | 2025-09-16 01:01:05 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:01:05.308329 | orchestrator | 2025-09-16 01:01:05 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:01:05.309800 | orchestrator | 2025-09-16 01:01:05 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:01:05.309991 | orchestrator | 2025-09-16 01:01:05 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:01:08.357016 | orchestrator | 2025-09-16 01:01:08 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:01:08.360478 | orchestrator | 2025-09-16 01:01:08 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:01:08.362510 | orchestrator | 2025-09-16 01:01:08 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:01:08.363923 | orchestrator | 2025-09-16 01:01:08 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:01:08.363946 | orchestrator | 2025-09-16 01:01:08 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:01:11.407800 | orchestrator | 2025-09-16 01:01:11 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:01:11.409273 | orchestrator | 2025-09-16 01:01:11 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:01:11.411241 | orchestrator | 2025-09-16 01:01:11 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:01:11.413125 | orchestrator | 2025-09-16 01:01:11 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:01:11.413475 | orchestrator | 2025-09-16 01:01:11 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:01:14.439620 | orchestrator | 2025-09-16 01:01:14 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:01:14.439722 | orchestrator | 2025-09-16 01:01:14 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:01:14.440328 | orchestrator | 2025-09-16 01:01:14 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:01:14.441406 | orchestrator | 2025-09-16 01:01:14 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:01:14.441680 | orchestrator | 2025-09-16 01:01:14 | INFO  | Wait 1 
second(s) until the next check 2025-09-16 01:01:17.475612 | orchestrator | 2025-09-16 01:01:17 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:01:17.477217 | orchestrator | 2025-09-16 01:01:17 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:01:17.479988 | orchestrator | 2025-09-16 01:01:17 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:01:17.481083 | orchestrator | 2025-09-16 01:01:17 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:01:17.481541 | orchestrator | 2025-09-16 01:01:17 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:01:20.525272 | orchestrator | 2025-09-16 01:01:20 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:01:20.525613 | orchestrator | 2025-09-16 01:01:20 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:01:20.526228 | orchestrator | 2025-09-16 01:01:20 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:01:20.526912 | orchestrator | 2025-09-16 01:01:20 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:01:20.527132 | orchestrator | 2025-09-16 01:01:20 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:01:23.549623 | orchestrator | 2025-09-16 01:01:23 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:01:23.551292 | orchestrator | 2025-09-16 01:01:23 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:01:23.552007 | orchestrator | 2025-09-16 01:01:23 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:01:23.553196 | orchestrator | 2025-09-16 01:01:23 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:01:23.553268 | orchestrator | 2025-09-16 01:01:23 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:01:26.580282 | orchestrator | 2025-09-16 01:01:26 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:01:26.580383 | orchestrator | 2025-09-16 01:01:26 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:01:26.580851 | orchestrator | 2025-09-16 01:01:26 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:01:26.581255 | orchestrator | 2025-09-16 01:01:26 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:01:26.581288 | orchestrator | 2025-09-16 01:01:26 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:01:29.605468 | orchestrator | 2025-09-16 01:01:29 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:01:29.605694 | orchestrator | 2025-09-16 01:01:29 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:01:29.606342 | orchestrator | 2025-09-16 01:01:29 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:01:29.607079 | orchestrator | 2025-09-16 01:01:29 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:01:29.607103 | orchestrator | 2025-09-16 01:01:29 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:01:32.645066 | orchestrator | 2025-09-16 01:01:32 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:01:32.647621 | orchestrator | 2025-09-16 01:01:32 | INFO  | Task 
e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:01:32.649417 | orchestrator | 2025-09-16 01:01:32 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state STARTED 2025-09-16 01:01:32.651320 | orchestrator | 2025-09-16 01:01:32 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:01:32.651683 | orchestrator | 2025-09-16 01:01:32 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:01:35.697818 | orchestrator | 2025-09-16 01:01:35 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:01:35.699342 | orchestrator | 2025-09-16 01:01:35 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:01:35.703902 | orchestrator | 2025-09-16 01:01:35 | INFO  | Task be9bbf2d-bf9d-4ff7-87a9-96197493311b is in state SUCCESS 2025-09-16 01:01:35.705894 | orchestrator | 2025-09-16 01:01:35.705929 | orchestrator | 2025-09-16 01:01:35.705943 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-16 01:01:35.705958 | orchestrator | 2025-09-16 01:01:35.705970 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-16 01:01:35.705982 | orchestrator | Tuesday 16 September 2025 00:58:43 +0000 (0:00:00.266) 0:00:00.266 ***** 2025-09-16 01:01:35.705993 | orchestrator | ok: [testbed-manager] 2025-09-16 01:01:35.706005 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:01:35.706279 | orchestrator | ok: [testbed-node-1] 2025-09-16 01:01:35.706295 | orchestrator | ok: [testbed-node-2] 2025-09-16 01:01:35.706306 | orchestrator | ok: [testbed-node-3] 2025-09-16 01:01:35.706317 | orchestrator | ok: [testbed-node-4] 2025-09-16 01:01:35.706328 | orchestrator | ok: [testbed-node-5] 2025-09-16 01:01:35.706339 | orchestrator | 2025-09-16 01:01:35.706350 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-16 01:01:35.706361 | orchestrator | Tuesday 16 September 2025 00:58:44 +0000 (0:00:00.745) 0:00:01.012 ***** 2025-09-16 01:01:35.706373 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-09-16 01:01:35.706384 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-09-16 01:01:35.706395 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-09-16 01:01:35.706406 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-09-16 01:01:35.706417 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-09-16 01:01:35.706428 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-09-16 01:01:35.706439 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-09-16 01:01:35.706450 | orchestrator | 2025-09-16 01:01:35.706461 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-09-16 01:01:35.706472 | orchestrator | 2025-09-16 01:01:35.706483 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-16 01:01:35.706507 | orchestrator | Tuesday 16 September 2025 00:58:45 +0000 (0:00:00.672) 0:00:01.684 ***** 2025-09-16 01:01:35.706519 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 01:01:35.706532 | orchestrator | 2025-09-16 01:01:35.706543 | orchestrator | TASK [prometheus : 
Ensuring config directories exist] ************************** 2025-09-16 01:01:35.706554 | orchestrator | Tuesday 16 September 2025 00:58:46 +0000 (0:00:01.440) 0:00:03.125 ***** 2025-09-16 01:01:35.706569 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-16 01:01:35.706585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.706619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.706631 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.706656 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.706668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.706685 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.706698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.706710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.706761 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.706775 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.706787 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.706806 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.706818 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.706836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.706849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.706861 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.706925 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.706940 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.706951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.706972 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-16 01:01:35.706993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.707006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.707025 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.707036 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.707048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.707059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.707077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.707089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.707100 | orchestrator | 2025-09-16 01:01:35.707133 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-16 01:01:35.707145 | orchestrator | Tuesday 16 September 2025 00:58:49 +0000 (0:00:03.403) 0:00:06.528 ***** 2025-09-16 01:01:35.707157 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 01:01:35.707168 | orchestrator | 2025-09-16 01:01:35.707190 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-09-16 01:01:35.707202 | orchestrator | Tuesday 16 September 2025 00:58:51 +0000 (0:00:01.363) 0:00:07.892 ***** 2025-09-16 01:01:35.707213 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-16 01:01:35.707232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.707243 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.707255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.707273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.707285 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.707296 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 
'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.707313 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.707331 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.707342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.707354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.707365 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.707385 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.707397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.707409 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.707431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.707443 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-16 01:01:35.707456 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.707467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.707485 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.707497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.707508 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.707530 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.707542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.707554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.707565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.707576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.708411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.708440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.708463 | orchestrator | 2025-09-16 01:01:35.708475 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-09-16 01:01:35.708486 | orchestrator | Tuesday 16 September 2025 00:58:57 +0000 (0:00:06.497) 0:00:14.390 ***** 2025-09-16 01:01:35.708505 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-16 01:01:35.708518 | orchestrator | skipping: [testbed-manager] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-16 01:01:35.708529 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-16 01:01:35.708541 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-16 01:01:35.708563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-16 01:01:35.708575 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 01:01:35.708594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 01:01:35.708611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 01:01:35.708623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-16 01:01:35.708635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 01:01:35.708646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-16 01:01:35.708658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 01:01:35.708675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 01:01:35.708687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-16 01:01:35.708817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 01:01:35.708831 | orchestrator | skipping: [testbed-manager] 2025-09-16 01:01:35.708843 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:01:35.708854 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:01:35.708871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-16 01:01:35.708883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 01:01:35.708894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 01:01:35.708906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-16 01:01:35.708917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 01:01:35.708928 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:01:35.708947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-16 01:01:35.708966 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-16 01:01:35.708987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-16 01:01:35.708999 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:01:35.709013 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-16 01:01:35.709027 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-16 01:01:35.709040 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-16 01:01:35.709053 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:01:35.709066 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-16 01:01:35.709079 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-16 01:01:35.709107 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-16 01:01:35.709175 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:01:35.709189 | orchestrator | 2025-09-16 01:01:35.709202 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-09-16 01:01:35.709216 | orchestrator | Tuesday 16 September 2025 00:58:59 +0000 (0:00:01.232) 0:00:15.623 ***** 2025-09-16 01:01:35.709235 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-16 01:01:35.709250 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-16 01:01:35.709263 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-16 01:01:35.709277 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-16 01:01:35.709299 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 01:01:35.709318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-16 01:01:35.709332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 01:01:35.709351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 
'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 01:01:35.709366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-16 01:01:35.709379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 01:01:35.709390 | orchestrator | skipping: [testbed-manager] 2025-09-16 01:01:35.709401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-16 01:01:35.709413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 01:01:35.709431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 01:01:35.709448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-16 01:01:35.709460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 01:01:35.709471 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:01:35.709482 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:01:35.709498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-16 01:01:35.709510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 01:01:35.709521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 01:01:35.709532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-16 01:01:35.709550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-16 01:01:35.709562 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:01:35.709579 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-16 01:01:35.709591 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-16 01:01:35.709602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-16 01:01:35.709614 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:01:35.709629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-16 01:01:35.709641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-16 01:01:35.709653 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-16 01:01:35.709670 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:01:35.709682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-16 01:01:35.709693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-16 01:01:35.709711 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-16 01:01:35.709723 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:01:35.709734 | orchestrator | 2025-09-16 01:01:35.709745 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-09-16 01:01:35.709756 | orchestrator | Tuesday 16 September 2025 00:59:00 +0000 (0:00:01.624) 0:00:17.247 ***** 2025-09-16 01:01:35.709768 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-16 01:01:35.709784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2025-09-16 01:01:35.709795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.709807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.709829 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.709840 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.709858 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.709869 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.709881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.709897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.709908 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.709926 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.709938 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.709949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.709966 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.709978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.709990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.710006 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-16 01:01:35.710074 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.710087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.710099 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.710137 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.710149 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.710161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.710177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.710196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.710208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-16 01:01:35.710219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.710231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.710242 | orchestrator | 2025-09-16 01:01:35.710253 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-09-16 01:01:35.710264 | orchestrator | Tuesday 16 September 2025 00:59:06 +0000 (0:00:05.907) 0:00:23.154 ***** 2025-09-16 01:01:35.710276 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-16 01:01:35.710287 | orchestrator | 2025-09-16 01:01:35.710298 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-09-16 01:01:35.710314 | orchestrator | Tuesday 16 September 2025 00:59:07 +0000 (0:00:00.887) 0:00:24.042 ***** 2025-09-16 01:01:35.710326 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1058241, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4278955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710338 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1058241, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4278955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710355 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1058257, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4327197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710373 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1058257, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4327197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710385 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1058241, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4278955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710396 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1058241, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4278955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710414 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1058239, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4268954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710425 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1058241, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4278955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710437 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1058241, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 
'ctime': 1757981841.4278955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710458 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1058239, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4268954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710470 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1058251, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4308956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710482 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1058257, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4327197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710493 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1058237, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4258955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710510 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1058251, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4308956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710522 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1058241, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4278955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 01:01:35.710533 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1058257, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4327197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710556 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1058257, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4327197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710568 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1058243, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4289923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710579 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1058257, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4327197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710590 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1058239, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4268954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710621 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1058237, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4258955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710633 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1058239, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4268954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710645 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1058250, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4308956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710667 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1058239, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4268954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710678 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1058239, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4268954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710690 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1058243, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4289923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2025-09-16 01:01:35.710701 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1058251, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4308956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710713 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1058257, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4327197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 01:01:35.710740 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1058244, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.42931, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710759 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1058251, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4308956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710775 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1058251, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4308956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710787 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1058250, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4308956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710798 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1058251, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4308956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710809 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1058237, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4258955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710821 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1058240, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4268954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710839 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1058237, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4258955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710857 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1058237, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4258955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710873 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1058237, 'dev': 148, 'nlink': 
1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4258955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710885 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1058244, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.42931, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710896 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1058243, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4289923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710907 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1058243, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4289923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710919 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1058239, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4268954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 01:01:35.710936 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1058256, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4324572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710962 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1058250, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4308956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710979 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1058243, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4289923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.710990 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1058240, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4268954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711002 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1058243, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4289923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711013 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1058250, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4308956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711025 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1058234, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4248955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711041 | orchestrator | skipping: [testbed-node-4] => 
(item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1058244, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.42931, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711060 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1058250, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4308956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711072 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1058256, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4324572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711084 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1058250, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4308956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711095 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1058251, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4308956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 01:01:35.711106 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1058234, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4248955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711137 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1058265, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4338956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711163 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1058240, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4268954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711211 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1058244, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.42931, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711229 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1058244, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.42931, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711240 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1058244, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.42931, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711252 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1058240, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4268954, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711263 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1058240, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4268954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711274 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1058254, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4319198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711300 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1058265, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4338956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711311 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1058256, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4324572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711328 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1058240, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4268954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711339 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 3, 'inode': 1058234, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4248955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711351 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1058237, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4258955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 01:01:35.711362 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1058256, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4324572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711373 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1058238, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4265614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711399 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1058256, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4324572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711411 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1058254, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4319198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711427 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1058256, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4324572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711439 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1058265, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4338956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711450 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1058234, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4248955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711461 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1058254, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4319198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711479 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1058235, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4256341, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711497 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1058238, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4265614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
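The long run of per-item results above (and continuing below through the remaining *.rules and *.rec.rules entries) comes from the prometheus role copying every rules file found under /operations/prometheus/ onto the host that runs prometheus-server; in this testbed only testbed-manager is in that group, so testbed-node-0 through testbed-node-5 report "skipping" for each item. As a minimal sketch only, assuming kolla-ansible-style task and variable names (the paths, register name, destination and group condition below are illustrative assumptions, not the actual role code), such a find/template pair could look like this:

    # Sketch only: an illustrative find/template pair consistent with the
    # per-item output in this log; names, paths and the group condition
    # are assumptions, not the real kolla-ansible prometheus role.
    - name: Find custom prometheus alert rules files
      ansible.builtin.find:
        paths: /operations/prometheus/
        patterns: "*.rules"
      delegate_to: localhost
      run_once: true
      register: prometheus_alert_rules

    - name: Copying over prometheus alert rules files
      ansible.builtin.template:
        src: "{{ item.path }}"
        dest: "/etc/kolla/prometheus-server/{{ item.path | basename }}"
        mode: "0660"
      loop: "{{ prometheus_alert_rules.files }}"
      when: inventory_hostname in groups['prometheus']  # here only testbed-manager

Because the when-condition is evaluated per loop item, every non-manager host prints one "skipping" line per rules file, which is exactly the pattern seen in the output around this point.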
2025-09-16 01:01:35.711508 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1058234, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4248955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711524 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1058234, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4248955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711536 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1058238, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4265614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711547 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1058247, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4301178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711559 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1058265, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4338956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711576 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1058265, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4338956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711595 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1058265, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4338956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711607 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1058235, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4256341, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711623 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1058243, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4289923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 01:01:35.711634 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1058254, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4319198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711646 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1058254, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4319198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711657 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1058235, 'dev': 148, 'nlink': 1, 
'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4256341, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711676 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1058245, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.429664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711693 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1058254, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4319198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711704 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1058238, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4265614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711724 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1058247, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4301178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711735 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1058238, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4265614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711747 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1058247, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4301178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711758 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1058264, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4337654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711776 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:01:35.711787 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1058238, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4265614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711820 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1058235, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4256341, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711832 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1058245, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.429664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711849 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1058235, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4256341, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 
01:01:35.711860 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1058264, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4337654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711872 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:01:35.711883 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1058247, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4301178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711901 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1058250, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4308956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 01:01:35.711912 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1058245, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.429664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711929 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1058235, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4256341, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711941 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1058245, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.429664, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711957 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1058264, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4337654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711968 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:01:35.711980 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1058247, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4301178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.711991 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1058264, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4337654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.712008 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:01:35.712020 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1058247, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4301178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.712031 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1058245, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.429664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.712049 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1058245, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.429664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.712061 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1058264, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4337654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.712072 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:01:35.712089 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1058244, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.42931, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 01:01:35.712100 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1058264, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4337654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-16 01:01:35.712175 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:01:35.712188 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1058240, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4268954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 01:01:35.712199 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1058256, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4324572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 01:01:35.712211 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1058234, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4248955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 01:01:35.712229 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1058265, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4338956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 01:01:35.712240 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1058254, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4319198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 01:01:35.712257 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1058238, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4265614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 01:01:35.712269 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1058235, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4256341, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 01:01:35.712287 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1058247, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 
'ctime': 1757981841.4301178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 01:01:35.712298 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1058245, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.429664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 01:01:35.712309 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1058264, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4337654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-16 01:01:35.712320 | orchestrator | 2025-09-16 01:01:35.712331 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-09-16 01:01:35.712343 | orchestrator | Tuesday 16 September 2025 00:59:29 +0000 (0:00:21.776) 0:00:45.818 ***** 2025-09-16 01:01:35.712354 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-16 01:01:35.712365 | orchestrator | 2025-09-16 01:01:35.712381 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-09-16 01:01:35.712393 | orchestrator | Tuesday 16 September 2025 00:59:29 +0000 (0:00:00.637) 0:00:46.455 ***** 2025-09-16 01:01:35.712404 | orchestrator | [WARNING]: Skipped 2025-09-16 01:01:35.712416 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-16 01:01:35.712427 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-09-16 01:01:35.712438 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-16 01:01:35.712449 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-09-16 01:01:35.712460 | orchestrator | [WARNING]: Skipped 2025-09-16 01:01:35.712471 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-16 01:01:35.712482 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-09-16 01:01:35.712493 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-16 01:01:35.712503 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-09-16 01:01:35.712514 | orchestrator | [WARNING]: Skipped 2025-09-16 01:01:35.712525 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-16 01:01:35.712536 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-09-16 01:01:35.712547 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-16 01:01:35.712564 | orchestrator | node-1/prometheus.yml.d' is not a 
directory 2025-09-16 01:01:35.712575 | orchestrator | [WARNING]: Skipped 2025-09-16 01:01:35.712586 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-16 01:01:35.712597 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-09-16 01:01:35.712613 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-16 01:01:35.712623 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-09-16 01:01:35.712633 | orchestrator | [WARNING]: Skipped 2025-09-16 01:01:35.712642 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-16 01:01:35.712652 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-09-16 01:01:35.712662 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-16 01:01:35.712671 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-09-16 01:01:35.712681 | orchestrator | [WARNING]: Skipped 2025-09-16 01:01:35.712691 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-16 01:01:35.712700 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-09-16 01:01:35.712710 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-16 01:01:35.712719 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-09-16 01:01:35.712729 | orchestrator | [WARNING]: Skipped 2025-09-16 01:01:35.712739 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-16 01:01:35.712748 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-09-16 01:01:35.712758 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-16 01:01:35.712768 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-09-16 01:01:35.712777 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-16 01:01:35.712787 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-16 01:01:35.712797 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-16 01:01:35.712806 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-16 01:01:35.712816 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-16 01:01:35.712826 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-16 01:01:35.712835 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-16 01:01:35.712845 | orchestrator | 2025-09-16 01:01:35.712854 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-09-16 01:01:35.712864 | orchestrator | Tuesday 16 September 2025 00:59:32 +0000 (0:00:02.424) 0:00:48.879 ***** 2025-09-16 01:01:35.712874 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-16 01:01:35.712884 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:01:35.712894 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-16 01:01:35.712904 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:01:35.712913 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-16 01:01:35.712923 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:01:35.712933 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  
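The [WARNING] lines from the "Find prometheus host config overrides" task above are only informational: the role looks for optional per-host override directories named <hostname>/prometheus.yml.d under the prometheus overlay, none of those directories exist in this testbed configuration, and the task still ends "ok" for every host. As a minimal sketch, assuming illustrative task and variable names (not the actual kolla-ansible prometheus role), the lookup that produces these warnings might look like:

    # Sketch only: an illustrative per-host override lookup consistent with
    # the warnings above; the task layout, variable names and patterns are
    # assumptions.
    - name: Find prometheus host config overrides
      ansible.builtin.find:
        paths: "/opt/configuration/environments/kolla/files/overlays/prometheus/{{ inventory_hostname }}/prometheus.yml.d"
        patterns: "*.yml"
      delegate_to: localhost
      register: prometheus_host_config_overrides

Under that assumption, creating one of these prometheus.yml.d directories in the configuration repository and dropping a *.yml fragment into it is what would make the lookup return overrides instead of emitting the "is not a directory" warning.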
2025-09-16 01:01:35.712942 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:01:35.712952 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-16 01:01:35.712961 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:01:35.712971 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-16 01:01:35.712981 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:01:35.712991 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-09-16 01:01:35.713006 | orchestrator | 2025-09-16 01:01:35.713015 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-09-16 01:01:35.713025 | orchestrator | Tuesday 16 September 2025 00:59:46 +0000 (0:00:14.096) 0:01:02.976 ***** 2025-09-16 01:01:35.713040 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-16 01:01:35 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:01:35.713051 | orchestrator | 2025-09-16 01:01:35 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:01:35.713132 | orchestrator | 2025-09-16 01:01:35.713145 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:01:35.713155 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-16 01:01:35.713165 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:01:35.713174 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-16 01:01:35.713184 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:01:35.713193 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-16 01:01:35.713203 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:01:35.713212 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-16 01:01:35.713222 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:01:35.713232 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-16 01:01:35.713241 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:01:35.713251 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-09-16 01:01:35.713260 | orchestrator | 2025-09-16 01:01:35.713270 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-09-16 01:01:35.713280 | orchestrator | Tuesday 16 September 2025 00:59:50 +0000 (0:00:04.398) 0:01:07.375 ***** 2025-09-16 01:01:35.713294 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-16 01:01:35.713305 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-16 01:01:35.713315 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-16 01:01:35.713324 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:01:35.713334 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:01:35.713343 | orchestrator
| skipping: [testbed-node-2] 2025-09-16 01:01:35.713353 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-16 01:01:35.713363 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:01:35.713372 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-16 01:01:35.713382 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:01:35.713392 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-09-16 01:01:35.713402 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-16 01:01:35.713411 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:01:35.713421 | orchestrator | 2025-09-16 01:01:35.713430 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-09-16 01:01:35.713440 | orchestrator | Tuesday 16 September 2025 00:59:52 +0000 (0:00:02.176) 0:01:09.551 ***** 2025-09-16 01:01:35.713450 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-16 01:01:35.713459 | orchestrator | 2025-09-16 01:01:35.713469 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-09-16 01:01:35.713485 | orchestrator | Tuesday 16 September 2025 00:59:53 +0000 (0:00:00.954) 0:01:10.506 ***** 2025-09-16 01:01:35.713495 | orchestrator | skipping: [testbed-manager] 2025-09-16 01:01:35.713505 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:01:35.713514 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:01:35.713524 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:01:35.713533 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:01:35.713542 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:01:35.713552 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:01:35.713561 | orchestrator | 2025-09-16 01:01:35.713571 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-09-16 01:01:35.713580 | orchestrator | Tuesday 16 September 2025 00:59:54 +0000 (0:00:00.618) 0:01:11.125 ***** 2025-09-16 01:01:35.713590 | orchestrator | skipping: [testbed-manager] 2025-09-16 01:01:35.713599 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:01:35.713609 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:01:35.713618 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:01:35.713628 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:01:35.713637 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:01:35.713647 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:01:35.713656 | orchestrator | 2025-09-16 01:01:35.713666 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-09-16 01:01:35.713675 | orchestrator | Tuesday 16 September 2025 00:59:57 +0000 (0:00:02.691) 0:01:13.816 ***** 2025-09-16 01:01:35.713685 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-16 01:01:35.713695 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:01:35.713704 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-16 01:01:35.713714 | orchestrator | skipping: 
[testbed-node-2] 2025-09-16 01:01:35.713723 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-16 01:01:35.713733 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-16 01:01:35.713743 | orchestrator | skipping: [testbed-manager] 2025-09-16 01:01:35.713753 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:01:35.713767 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-16 01:01:35.713777 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:01:35.713787 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-16 01:01:35.713796 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:01:35.713806 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-16 01:01:35.713816 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:01:35.713825 | orchestrator | 2025-09-16 01:01:35.713835 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-09-16 01:01:35.713845 | orchestrator | Tuesday 16 September 2025 00:59:58 +0000 (0:00:01.731) 0:01:15.548 ***** 2025-09-16 01:01:35.713855 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-16 01:01:35.713865 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:01:35.713874 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-16 01:01:35.713884 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:01:35.713894 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-16 01:01:35.713903 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:01:35.713913 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-09-16 01:01:35.713932 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-16 01:01:35.713948 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:01:35.713958 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-16 01:01:35.713968 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:01:35.713977 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-16 01:01:35.713987 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:01:35.713996 | orchestrator | 2025-09-16 01:01:35.714006 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-09-16 01:01:35.714054 | orchestrator | Tuesday 16 September 2025 01:00:00 +0000 (0:00:01.249) 0:01:16.797 ***** 2025-09-16 01:01:35.714065 | orchestrator | [WARNING]: Skipped 2025-09-16 01:01:35.714075 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-09-16 01:01:35.714085 | orchestrator | due to this access issue: 2025-09-16 01:01:35.714095 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-09-16 01:01:35.714105 | orchestrator | not a directory 
2025-09-16 01:01:35.714132 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-16 01:01:35.714142 | orchestrator | 2025-09-16 01:01:35.714152 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-09-16 01:01:35.714162 | orchestrator | Tuesday 16 September 2025 01:00:01 +0000 (0:00:00.998) 0:01:17.796 ***** 2025-09-16 01:01:35.714172 | orchestrator | skipping: [testbed-manager] 2025-09-16 01:01:35.714182 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:01:35.714191 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:01:35.714201 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:01:35.714211 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:01:35.714220 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:01:35.714230 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:01:35.714240 | orchestrator | 2025-09-16 01:01:35.714250 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-09-16 01:01:35.714259 | orchestrator | Tuesday 16 September 2025 01:00:02 +0000 (0:00:01.047) 0:01:18.844 ***** 2025-09-16 01:01:35.714269 | orchestrator | skipping: [testbed-manager] 2025-09-16 01:01:35.714279 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:01:35.714288 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:01:35.714298 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:01:35.714308 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:01:35.714317 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:01:35.714327 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:01:35.714337 | orchestrator | 2025-09-16 01:01:35.714346 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-09-16 01:01:35.714356 | orchestrator | Tuesday 16 September 2025 01:00:02 +0000 (0:00:00.732) 0:01:19.576 ***** 2025-09-16 01:01:35.714367 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-16 01:01:35.714385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.714402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.714417 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.714427 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.714437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.714447 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.714458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.714469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.714484 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.714500 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.714515 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-16 01:01:35.714526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.714537 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-16 01:01:35.714548 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.714559 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.714579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.714590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.714604 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.714614 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.714624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.714635 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.714645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.714655 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.714675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.714685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-16 01:01:35.714700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.714710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.714720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-16 01:01:35.714730 | orchestrator | 2025-09-16 01:01:35.714740 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-09-16 01:01:35.714750 | orchestrator | Tuesday 16 September 2025 01:00:07 +0000 (0:00:04.807) 0:01:24.384 ***** 2025-09-16 01:01:35.714760 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-16 01:01:35.714770 | orchestrator | skipping: [testbed-manager] 2025-09-16 01:01:35.714780 | orchestrator | 2025-09-16 01:01:35.714789 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-16 01:01:35.714799 | orchestrator | Tuesday 16 September 2025 01:00:09 +0000 (0:00:01.665) 0:01:26.049 ***** 2025-09-16 01:01:35.714809 | orchestrator | 2025-09-16 01:01:35.714818 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-16 01:01:35.714828 | orchestrator | Tuesday 16 September 2025 01:00:09 +0000 (0:00:00.062) 0:01:26.112 ***** 2025-09-16 01:01:35.714837 | orchestrator | 2025-09-16 01:01:35.714852 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-16 01:01:35.714862 | orchestrator | Tuesday 16 September 2025 01:00:09 +0000 (0:00:00.066) 0:01:26.179 ***** 2025-09-16 01:01:35.714871 | orchestrator | 2025-09-16 01:01:35.714881 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-16 01:01:35.714890 | orchestrator | Tuesday 16 September 2025 01:00:09 +0000 (0:00:00.060) 0:01:26.239 ***** 2025-09-16 01:01:35.714900 | orchestrator | 2025-09-16 01:01:35.714909 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-16 01:01:35.714919 | orchestrator | Tuesday 16 September 2025 01:00:09 +0000 (0:00:00.180) 0:01:26.420 ***** 2025-09-16 01:01:35.714929 | orchestrator | 2025-09-16 01:01:35.714938 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-16 01:01:35.714948 | orchestrator | Tuesday 16 September 2025 01:00:09 +0000 (0:00:00.081) 0:01:26.501 ***** 2025-09-16 01:01:35.714958 | orchestrator | 2025-09-16 01:01:35.714967 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-16 01:01:35.714977 | orchestrator | Tuesday 16 September 2025 01:00:10 +0000 (0:00:00.124) 0:01:26.626 ***** 2025-09-16 01:01:35.714986 | orchestrator | 2025-09-16 01:01:35.714996 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-09-16 01:01:35.715005 | 
orchestrator | Tuesday 16 September 2025 01:00:10 +0000 (0:00:00.100) 0:01:26.727 ***** 2025-09-16 01:01:35.715015 | orchestrator | changed: [testbed-manager] 2025-09-16 01:01:35.715025 | orchestrator | 2025-09-16 01:01:35.715034 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-09-16 01:01:35.715048 | orchestrator | Tuesday 16 September 2025 01:00:23 +0000 (0:00:13.787) 0:01:40.515 ***** 2025-09-16 01:01:35.715058 | orchestrator | changed: [testbed-manager] 2025-09-16 01:01:35.715068 | orchestrator | changed: [testbed-node-3] 2025-09-16 01:01:35.715077 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:01:35.715087 | orchestrator | changed: [testbed-node-5] 2025-09-16 01:01:35.715097 | orchestrator | changed: [testbed-node-4] 2025-09-16 01:01:35.715106 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:01:35.715159 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:01:35.715169 | orchestrator | 2025-09-16 01:01:35.715179 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-09-16 01:01:35.715189 | orchestrator | Tuesday 16 September 2025 01:00:36 +0000 (0:00:12.482) 0:01:52.998 ***** 2025-09-16 01:01:35.715199 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:01:35.715208 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:01:35.715218 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:01:35.715227 | orchestrator | 2025-09-16 01:01:35.715237 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-09-16 01:01:35.715247 | orchestrator | Tuesday 16 September 2025 01:00:46 +0000 (0:00:09.624) 0:02:02.622 ***** 2025-09-16 01:01:35.715256 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:01:35.715265 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:01:35.715273 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:01:35.715281 | orchestrator | 2025-09-16 01:01:35.715289 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-09-16 01:01:35.715296 | orchestrator | Tuesday 16 September 2025 01:00:51 +0000 (0:00:05.230) 0:02:07.853 ***** 2025-09-16 01:01:35.715304 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:01:35.715312 | orchestrator | changed: [testbed-node-4] 2025-09-16 01:01:35.715324 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:01:35.715332 | orchestrator | changed: [testbed-node-5] 2025-09-16 01:01:35.715340 | orchestrator | changed: [testbed-node-3] 2025-09-16 01:01:35.715348 | orchestrator | changed: [testbed-manager] 2025-09-16 01:01:35.715356 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:01:35.715363 | orchestrator | 2025-09-16 01:01:35.715372 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-09-16 01:01:35.715379 | orchestrator | Tuesday 16 September 2025 01:01:04 +0000 (0:00:13.551) 0:02:21.405 ***** 2025-09-16 01:01:35.715393 | orchestrator | changed: [testbed-manager] 2025-09-16 01:01:35.715401 | orchestrator | 2025-09-16 01:01:35.715408 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-09-16 01:01:35.715417 | orchestrator | Tuesday 16 September 2025 01:01:11 +0000 (0:00:06.839) 0:02:28.244 ***** 2025-09-16 01:01:35.715424 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:01:35.715432 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:01:35.715440 | orchestrator | changed: 
[testbed-node-1] 2025-09-16 01:01:35.715448 | orchestrator | 2025-09-16 01:01:35.715456 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-09-16 01:01:35.715463 | orchestrator | Tuesday 16 September 2025 01:01:21 +0000 (0:00:09.660) 0:02:37.905 ***** 2025-09-16 01:01:35.715471 | orchestrator | changed: [testbed-manager] 2025-09-16 01:01:35.715479 | orchestrator | 2025-09-16 01:01:35.715487 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-09-16 01:01:35.715495 | orchestrator | Tuesday 16 September 2025 01:01:30 +0000 (0:00:09.031) 0:02:46.936 ***** 2025-09-16 01:01:35.715503 | orchestrator | changed: [testbed-node-4] 2025-09-16 01:01:35.715510 | orchestrator | changed: [testbed-node-3] 2025-09-16 01:01:35.715518 | orchestrator | changed: [testbed-node-5] 2025-09-16 01:01:35.715526 | orchestrator | 2025-09-16 01:01:35.715534 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 01:01:35.715542 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-16 01:01:35.715550 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-16 01:01:35.715559 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-16 01:01:35.715567 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-16 01:01:35.715575 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-16 01:01:35.715583 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-16 01:01:35.715591 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-16 01:01:35.715598 | orchestrator | 2025-09-16 01:01:35.715606 | orchestrator | 2025-09-16 01:01:35.715614 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 01:01:35.715622 | orchestrator | Tuesday 16 September 2025 01:01:35 +0000 (0:00:04.858) 0:02:51.795 ***** 2025-09-16 01:01:35.715630 | orchestrator | =============================================================================== 2025-09-16 01:01:35.715638 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 21.78s 2025-09-16 01:01:35.715646 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 14.10s 2025-09-16 01:01:35.715654 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 13.79s 2025-09-16 01:01:35.715662 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.55s 2025-09-16 01:01:35.715669 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 12.48s 2025-09-16 01:01:35.715682 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 9.66s 2025-09-16 01:01:35.715690 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 9.62s 2025-09-16 01:01:35.715698 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 9.03s 2025-09-16 01:01:35.715706 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 6.84s 
2025-09-16 01:01:35.715718 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.50s 2025-09-16 01:01:35.715726 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.91s 2025-09-16 01:01:35.715734 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.23s 2025-09-16 01:01:35.715742 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 4.86s 2025-09-16 01:01:35.715750 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.81s 2025-09-16 01:01:35.715757 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.40s 2025-09-16 01:01:35.715765 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.40s 2025-09-16 01:01:35.715773 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.69s 2025-09-16 01:01:35.715781 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.42s 2025-09-16 01:01:35.715792 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.18s 2025-09-16 01:01:35.715800 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.73s 2025-09-16 01:01:38.741657 | orchestrator | 2025-09-16 01:01:38 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state STARTED 2025-09-16 01:01:38.742002 | orchestrator | 2025-09-16 01:01:38 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:01:38.743582 | orchestrator | 2025-09-16 01:01:38 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:01:38.744580 | orchestrator | 2025-09-16 01:01:38 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:01:38.744604 | orchestrator | 2025-09-16 01:01:38 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:01:41.784620 | orchestrator | 2025-09-16 01:01:41 | INFO  | Task f70760e5-4ce6-4a56-bc44-f6172412e85a is in state SUCCESS 2025-09-16 01:01:41.786220 | orchestrator | 2025-09-16 01:01:41.786306 | orchestrator | 2025-09-16 01:01:41.786324 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-16 01:01:41.786337 | orchestrator | 2025-09-16 01:01:41.786349 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-16 01:01:41.786360 | orchestrator | Tuesday 16 September 2025 00:58:50 +0000 (0:00:00.295) 0:00:00.295 ***** 2025-09-16 01:01:41.786371 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:01:41.786383 | orchestrator | ok: [testbed-node-1] 2025-09-16 01:01:41.786394 | orchestrator | ok: [testbed-node-2] 2025-09-16 01:01:41.786405 | orchestrator | 2025-09-16 01:01:41.786416 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-16 01:01:41.786427 | orchestrator | Tuesday 16 September 2025 00:58:50 +0000 (0:00:00.279) 0:00:00.575 ***** 2025-09-16 01:01:41.786438 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-09-16 01:01:41.786450 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-09-16 01:01:41.786461 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-09-16 01:01:41.786472 | orchestrator | 2025-09-16 01:01:41.786483 | orchestrator | PLAY [Apply role glance] 
******************************************************* 2025-09-16 01:01:41.786494 | orchestrator | 2025-09-16 01:01:41.786505 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-16 01:01:41.786515 | orchestrator | Tuesday 16 September 2025 00:58:51 +0000 (0:00:00.443) 0:00:01.018 ***** 2025-09-16 01:01:41.786526 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 01:01:41.786538 | orchestrator | 2025-09-16 01:01:41.786549 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-09-16 01:01:41.786559 | orchestrator | Tuesday 16 September 2025 00:58:51 +0000 (0:00:00.698) 0:00:01.717 ***** 2025-09-16 01:01:41.786570 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-09-16 01:01:41.786611 | orchestrator | 2025-09-16 01:01:41.786623 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-09-16 01:01:41.786634 | orchestrator | Tuesday 16 September 2025 00:59:04 +0000 (0:00:12.260) 0:00:13.977 ***** 2025-09-16 01:01:41.786644 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-09-16 01:01:41.786655 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-09-16 01:01:41.786666 | orchestrator | 2025-09-16 01:01:41.786677 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-09-16 01:01:41.786688 | orchestrator | Tuesday 16 September 2025 00:59:10 +0000 (0:00:06.761) 0:00:20.738 ***** 2025-09-16 01:01:41.786699 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-09-16 01:01:41.786710 | orchestrator | 2025-09-16 01:01:41.786720 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-09-16 01:01:41.786731 | orchestrator | Tuesday 16 September 2025 00:59:14 +0000 (0:00:03.431) 0:00:24.170 ***** 2025-09-16 01:01:41.786742 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-16 01:01:41.786753 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-09-16 01:01:41.786764 | orchestrator | 2025-09-16 01:01:41.786774 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-09-16 01:01:41.786785 | orchestrator | Tuesday 16 September 2025 00:59:18 +0000 (0:00:04.225) 0:00:28.395 ***** 2025-09-16 01:01:41.786796 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-16 01:01:41.786807 | orchestrator | 2025-09-16 01:01:41.786817 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-09-16 01:01:41.786828 | orchestrator | Tuesday 16 September 2025 00:59:22 +0000 (0:00:03.396) 0:00:31.792 ***** 2025-09-16 01:01:41.786839 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-09-16 01:01:41.786850 | orchestrator | 2025-09-16 01:01:41.786861 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-09-16 01:01:41.786872 | orchestrator | Tuesday 16 September 2025 00:59:26 +0000 (0:00:04.602) 0:00:36.394 ***** 2025-09-16 01:01:41.786924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-16 01:01:41.786943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-16 01:01:41.786969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-16 01:01:41.786983 | orchestrator | 2025-09-16 01:01:41.786994 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-16 01:01:41.787006 | orchestrator | Tuesday 16 September 2025 00:59:29 +0000 (0:00:03.023) 0:00:39.418 ***** 2025-09-16 01:01:41.787017 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 01:01:41.787028 | orchestrator | 2025-09-16 01:01:41.787045 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-09-16 01:01:41.787057 | orchestrator | Tuesday 16 September 2025 00:59:30 +0000 (0:00:00.617) 0:00:40.036 ***** 2025-09-16 01:01:41.787068 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:01:41.787079 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:01:41.787096 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:01:41.787107 | orchestrator | 2025-09-16 01:01:41.787140 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-09-16 01:01:41.787151 | orchestrator | Tuesday 16 September 2025 00:59:35 +0000 (0:00:05.229) 0:00:45.265 ***** 2025-09-16 01:01:41.787162 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-16 01:01:41.787173 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-16 01:01:41.787185 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-16 01:01:41.787195 | orchestrator | 2025-09-16 01:01:41.787206 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-09-16 01:01:41.787217 | orchestrator | Tuesday 16 September 2025 00:59:36 +0000 (0:00:01.363) 0:00:46.629 ***** 2025-09-16 01:01:41.787228 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 
2025-09-16 01:01:41.787239 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-16 01:01:41.787250 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-16 01:01:41.787261 | orchestrator | 2025-09-16 01:01:41.787272 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-09-16 01:01:41.787283 | orchestrator | Tuesday 16 September 2025 00:59:37 +0000 (0:00:01.025) 0:00:47.655 ***** 2025-09-16 01:01:41.787294 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:01:41.787305 | orchestrator | ok: [testbed-node-1] 2025-09-16 01:01:41.787316 | orchestrator | ok: [testbed-node-2] 2025-09-16 01:01:41.787326 | orchestrator | 2025-09-16 01:01:41.787337 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-09-16 01:01:41.787348 | orchestrator | Tuesday 16 September 2025 00:59:38 +0000 (0:00:00.578) 0:00:48.233 ***** 2025-09-16 01:01:41.787359 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:01:41.787370 | orchestrator | 2025-09-16 01:01:41.787381 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-09-16 01:01:41.787392 | orchestrator | Tuesday 16 September 2025 00:59:38 +0000 (0:00:00.235) 0:00:48.468 ***** 2025-09-16 01:01:41.787403 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:01:41.787414 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:01:41.787425 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:01:41.787435 | orchestrator | 2025-09-16 01:01:41.787446 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-16 01:01:41.787457 | orchestrator | Tuesday 16 September 2025 00:59:38 +0000 (0:00:00.243) 0:00:48.711 ***** 2025-09-16 01:01:41.787468 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 01:01:41.787479 | orchestrator | 2025-09-16 01:01:41.787490 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-09-16 01:01:41.787501 | orchestrator | Tuesday 16 September 2025 00:59:39 +0000 (0:00:00.469) 0:00:49.181 ***** 2025-09-16 01:01:41.787524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-16 01:01:41.787547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-16 01:01:41.787566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-16 01:01:41.787585 | orchestrator | 2025-09-16 01:01:41.787596 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-09-16 01:01:41.787607 | orchestrator | Tuesday 16 September 2025 00:59:43 +0000 (0:00:03.755) 0:00:52.937 ***** 2025-09-16 01:01:41.787627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-16 01:01:41.787640 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:01:41.787652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-16 01:01:41.787672 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:01:41.787697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-16 01:01:41.787710 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:01:41.787721 | orchestrator | 2025-09-16 01:01:41.787731 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-09-16 01:01:41.787742 | orchestrator | Tuesday 16 September 2025 00:59:45 +0000 (0:00:02.612) 0:00:55.549 ***** 2025-09-16 01:01:41.787754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 
'', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-16 01:01:41.787766 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:01:41.787790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-16 01:01:41.787810 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:01:41.787822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-16 01:01:41.787834 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:01:41.787844 | orchestrator | 2025-09-16 01:01:41.787855 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-09-16 01:01:41.787866 | orchestrator | Tuesday 16 September 2025 00:59:50 +0000 (0:00:04.226) 0:00:59.776 ***** 2025-09-16 01:01:41.787877 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:01:41.787888 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:01:41.787899 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:01:41.787910 | orchestrator | 2025-09-16 01:01:41.787920 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-09-16 01:01:41.787931 | orchestrator | Tuesday 16 September 2025 00:59:54 +0000 (0:00:04.334) 0:01:04.110 ***** 2025-09-16 01:01:41.787966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-16 01:01:41.787980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-16 01:01:41.787997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-16 01:01:41.788017 | orchestrator | 2025-09-16 01:01:41.788028 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-09-16 01:01:41.788038 | orchestrator | Tuesday 16 September 2025 00:59:59 +0000 (0:00:05.531) 0:01:09.642 ***** 2025-09-16 01:01:41.788049 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:01:41.788060 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:01:41.788071 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:01:41.788081 | orchestrator | 2025-09-16 01:01:41.788092 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-09-16 01:01:41.788103 | orchestrator | Tuesday 16 September 2025 01:00:06 +0000 (0:00:06.473) 0:01:16.116 ***** 2025-09-16 01:01:41.788144 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:01:41.788156 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:01:41.788167 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:01:41.788177 | orchestrator | 2025-09-16 01:01:41.788188 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-09-16 01:01:41.788205 | orchestrator | Tuesday 16 September 2025 01:00:09 +0000 (0:00:03.249) 0:01:19.365 ***** 2025-09-16 01:01:41.788217 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:01:41.788228 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:01:41.788238 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:01:41.788249 | orchestrator | 2025-09-16 01:01:41.788260 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-09-16 01:01:41.788271 | orchestrator | Tuesday 16 September 2025 01:00:13 +0000 (0:00:03.613) 0:01:22.979 ***** 2025-09-16 01:01:41.788282 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:01:41.788292 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:01:41.788303 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:01:41.788314 | orchestrator | 2025-09-16 01:01:41.788324 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-09-16 01:01:41.788335 | orchestrator | Tuesday 16 September 2025 01:00:17 +0000 (0:00:04.064) 0:01:27.044 ***** 2025-09-16 01:01:41.788346 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:01:41.788357 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:01:41.788367 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:01:41.788378 | orchestrator | 2025-09-16 01:01:41.788389 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-09-16 01:01:41.788400 | orchestrator | Tuesday 16 September 2025 01:00:22 +0000 (0:00:05.029) 0:01:32.073 ***** 2025-09-16 01:01:41.788411 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:01:41.788422 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:01:41.788432 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:01:41.788443 | orchestrator | 2025-09-16 01:01:41.788454 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-09-16 01:01:41.788465 | orchestrator | Tuesday 16 September 2025 01:00:22 +0000 (0:00:00.252) 0:01:32.326 ***** 2025-09-16 01:01:41.788482 | orchestrator | skipping: 
[testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-16 01:01:41.788493 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:01:41.788504 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-16 01:01:41.788515 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:01:41.788526 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-16 01:01:41.788536 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:01:41.788547 | orchestrator | 2025-09-16 01:01:41.788558 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-09-16 01:01:41.788569 | orchestrator | Tuesday 16 September 2025 01:00:31 +0000 (0:00:09.220) 0:01:41.546 ***** 2025-09-16 01:01:41.788586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-16 01:01:41.788608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-16 01:01:41.788627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-16 01:01:41.788640 | orchestrator | 2025-09-16 01:01:41.788650 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-16 01:01:41.788661 | orchestrator | Tuesday 16 September 2025 01:00:36 +0000 (0:00:04.398) 0:01:45.945 ***** 2025-09-16 01:01:41.788672 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:01:41.788683 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:01:41.788693 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:01:41.788704 | orchestrator | 2025-09-16 01:01:41.788715 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-09-16 01:01:41.788726 | orchestrator | Tuesday 16 September 2025 01:00:36 +0000 (0:00:00.267) 0:01:46.213 ***** 2025-09-16 01:01:41.788736 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:01:41.788747 | orchestrator | 2025-09-16 01:01:41.788763 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 
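A minimal sketch, assuming only the node names and addresses visible in the glance loop items above, of how the repeated custom_member_list entries (the same three 'server ...' lines in both glance_api and glance_api_external) can be reproduced; this is an illustration, not code from the job itself:

    nodes = {
        "testbed-node-0": "192.168.16.10",
        "testbed-node-1": "192.168.16.11",
        "testbed-node-2": "192.168.16.12",
    }
    # Build one HAProxy backend member line per controller, matching the
    # 'server <name> <ip>:9292 check inter 2000 rise 2 fall 5' strings above.
    members = [
        f"server {name} {ip}:9292 check inter 2000 rise 2 fall 5"
        for name, ip in nodes.items()
    ]
    print("\n".join(members))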
2025-09-16 01:01:41.788774 | orchestrator | Tuesday 16 September 2025 01:00:38 +0000 (0:00:01.998) 0:01:48.211 ***** 2025-09-16 01:01:41.788785 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:01:41.788795 | orchestrator | 2025-09-16 01:01:41.788806 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-09-16 01:01:41.788817 | orchestrator | Tuesday 16 September 2025 01:00:40 +0000 (0:00:02.078) 0:01:50.290 ***** 2025-09-16 01:01:41.788827 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:01:41.788838 | orchestrator | 2025-09-16 01:01:41.788848 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-09-16 01:01:41.788859 | orchestrator | Tuesday 16 September 2025 01:00:42 +0000 (0:00:02.001) 0:01:52.292 ***** 2025-09-16 01:01:41.788870 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:01:41.788880 | orchestrator | 2025-09-16 01:01:41.788891 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-09-16 01:01:41.788902 | orchestrator | Tuesday 16 September 2025 01:01:10 +0000 (0:00:28.029) 0:02:20.322 ***** 2025-09-16 01:01:41.788913 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:01:41.788923 | orchestrator | 2025-09-16 01:01:41.788940 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-16 01:01:41.788957 | orchestrator | Tuesday 16 September 2025 01:01:12 +0000 (0:00:01.998) 0:02:22.320 ***** 2025-09-16 01:01:41.788968 | orchestrator | 2025-09-16 01:01:41.788979 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-16 01:01:41.788990 | orchestrator | Tuesday 16 September 2025 01:01:12 +0000 (0:00:00.060) 0:02:22.381 ***** 2025-09-16 01:01:41.789000 | orchestrator | 2025-09-16 01:01:41.789011 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-16 01:01:41.789022 | orchestrator | Tuesday 16 September 2025 01:01:12 +0000 (0:00:00.061) 0:02:22.442 ***** 2025-09-16 01:01:41.789033 | orchestrator | 2025-09-16 01:01:41.789043 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-09-16 01:01:41.789054 | orchestrator | Tuesday 16 September 2025 01:01:12 +0000 (0:00:00.067) 0:02:22.510 ***** 2025-09-16 01:01:41.789065 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:01:41.789076 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:01:41.789086 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:01:41.789097 | orchestrator | 2025-09-16 01:01:41.789108 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 01:01:41.789181 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-16 01:01:41.789196 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-16 01:01:41.789207 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-16 01:01:41.789218 | orchestrator | 2025-09-16 01:01:41.789229 | orchestrator | 2025-09-16 01:01:41.789239 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 01:01:41.789250 | orchestrator | Tuesday 16 September 2025 01:01:39 +0000 (0:00:27.056) 0:02:49.566 ***** 2025-09-16 01:01:41.789261 | orchestrator | 
=============================================================================== 2025-09-16 01:01:41.789272 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.03s 2025-09-16 01:01:41.789283 | orchestrator | glance : Restart glance-api container ---------------------------------- 27.06s 2025-09-16 01:01:41.789293 | orchestrator | service-ks-register : glance | Creating services ----------------------- 12.26s 2025-09-16 01:01:41.789304 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 9.22s 2025-09-16 01:01:41.789315 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.76s 2025-09-16 01:01:41.789325 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.47s 2025-09-16 01:01:41.789336 | orchestrator | glance : Copying over config.json files for services -------------------- 5.53s 2025-09-16 01:01:41.789346 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 5.23s 2025-09-16 01:01:41.789356 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.03s 2025-09-16 01:01:41.789366 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.60s 2025-09-16 01:01:41.789375 | orchestrator | glance : Check glance containers ---------------------------------------- 4.40s 2025-09-16 01:01:41.789385 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.33s 2025-09-16 01:01:41.789394 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.23s 2025-09-16 01:01:41.789404 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.23s 2025-09-16 01:01:41.789413 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.06s 2025-09-16 01:01:41.789423 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.76s 2025-09-16 01:01:41.789432 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.61s 2025-09-16 01:01:41.789451 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.43s 2025-09-16 01:01:41.789460 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.40s 2025-09-16 01:01:41.789470 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.25s 2025-09-16 01:01:41.789480 | orchestrator | 2025-09-16 01:01:41 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:01:41.791961 | orchestrator | 2025-09-16 01:01:41 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:01:41.793321 | orchestrator | 2025-09-16 01:01:41 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:01:41.795374 | orchestrator | 2025-09-16 01:01:41 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:01:41.795687 | orchestrator | 2025-09-16 01:01:41 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:01:44.844004 | orchestrator | 2025-09-16 01:01:44 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:01:44.845207 | orchestrator | 2025-09-16 01:01:44 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:01:44.846400 | orchestrator | 2025-09-16 01:01:44 | 
INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:01:44.847588 | orchestrator | 2025-09-16 01:01:44 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:01:44.847876 | orchestrator | 2025-09-16 01:01:44 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:01:47.889869 | orchestrator | 2025-09-16 01:01:47 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:01:47.891843 | orchestrator | 2025-09-16 01:01:47 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:01:47.893423 | orchestrator | 2025-09-16 01:01:47 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:01:47.895000 | orchestrator | 2025-09-16 01:01:47 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:01:47.895185 | orchestrator | 2025-09-16 01:01:47 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:01:50.938829 | orchestrator | 2025-09-16 01:01:50 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:01:50.941020 | orchestrator | 2025-09-16 01:01:50 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:01:50.943301 | orchestrator | 2025-09-16 01:01:50 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:01:50.945354 | orchestrator | 2025-09-16 01:01:50 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:01:50.945383 | orchestrator | 2025-09-16 01:01:50 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:01:53.987388 | orchestrator | 2025-09-16 01:01:53 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:01:53.988842 | orchestrator | 2025-09-16 01:01:53 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:01:53.991071 | orchestrator | 2025-09-16 01:01:53 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:01:53.995210 | orchestrator | 2025-09-16 01:01:53 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:01:53.995700 | orchestrator | 2025-09-16 01:01:53 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:01:57.039103 | orchestrator | 2025-09-16 01:01:57 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:01:57.039810 | orchestrator | 2025-09-16 01:01:57 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:01:57.040641 | orchestrator | 2025-09-16 01:01:57 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:01:57.041610 | orchestrator | 2025-09-16 01:01:57 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:01:57.041643 | orchestrator | 2025-09-16 01:01:57 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:02:00.081008 | orchestrator | 2025-09-16 01:02:00 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:02:00.082241 | orchestrator | 2025-09-16 01:02:00 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:02:00.083995 | orchestrator | 2025-09-16 01:02:00 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:02:00.085682 | orchestrator | 2025-09-16 01:02:00 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:02:00.085704 | orchestrator | 2025-09-16 01:02:00 | 
INFO  | Wait 1 second(s) until the next check 2025-09-16 01:02:03.127006 | orchestrator | 2025-09-16 01:02:03 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:02:03.128710 | orchestrator | 2025-09-16 01:02:03 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:02:03.129680 | orchestrator | 2025-09-16 01:02:03 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:02:03.132330 | orchestrator | 2025-09-16 01:02:03 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:02:03.132353 | orchestrator | 2025-09-16 01:02:03 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:02:06.173674 | orchestrator | 2025-09-16 01:02:06 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:02:06.174470 | orchestrator | 2025-09-16 01:02:06 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:02:06.175524 | orchestrator | 2025-09-16 01:02:06 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:02:06.176611 | orchestrator | 2025-09-16 01:02:06 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:02:06.176635 | orchestrator | 2025-09-16 01:02:06 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:02:09.229384 | orchestrator | 2025-09-16 01:02:09 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:02:09.230837 | orchestrator | 2025-09-16 01:02:09 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:02:09.232056 | orchestrator | 2025-09-16 01:02:09 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:02:09.233383 | orchestrator | 2025-09-16 01:02:09 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:02:09.233493 | orchestrator | 2025-09-16 01:02:09 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:02:12.275423 | orchestrator | 2025-09-16 01:02:12 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:02:12.276296 | orchestrator | 2025-09-16 01:02:12 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:02:12.278467 | orchestrator | 2025-09-16 01:02:12 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:02:12.281371 | orchestrator | 2025-09-16 01:02:12 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:02:12.281410 | orchestrator | 2025-09-16 01:02:12 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:02:15.311301 | orchestrator | 2025-09-16 01:02:15 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:02:15.311563 | orchestrator | 2025-09-16 01:02:15 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:02:15.314111 | orchestrator | 2025-09-16 01:02:15 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:02:15.314695 | orchestrator | 2025-09-16 01:02:15 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:02:15.314754 | orchestrator | 2025-09-16 01:02:15 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:02:18.355692 | orchestrator | 2025-09-16 01:02:18 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:02:18.356277 | orchestrator | 2025-09-16 01:02:18 | INFO  | Task 
8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:02:18.361718 | orchestrator | 2025-09-16 01:02:18 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:02:18.365925 | orchestrator | 2025-09-16 01:02:18 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:02:18.365950 | orchestrator | 2025-09-16 01:02:18 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:02:21.398739 | orchestrator | 2025-09-16 01:02:21 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:02:21.401774 | orchestrator | 2025-09-16 01:02:21 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:02:21.403569 | orchestrator | 2025-09-16 01:02:21 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:02:21.404512 | orchestrator | 2025-09-16 01:02:21 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:02:21.404537 | orchestrator | 2025-09-16 01:02:21 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:02:24.437916 | orchestrator | 2025-09-16 01:02:24 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:02:24.438196 | orchestrator | 2025-09-16 01:02:24 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:02:24.438778 | orchestrator | 2025-09-16 01:02:24 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:02:24.441795 | orchestrator | 2025-09-16 01:02:24 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:02:24.441889 | orchestrator | 2025-09-16 01:02:24 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:02:27.473185 | orchestrator | 2025-09-16 01:02:27 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:02:27.473300 | orchestrator | 2025-09-16 01:02:27 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:02:27.474003 | orchestrator | 2025-09-16 01:02:27 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:02:27.474605 | orchestrator | 2025-09-16 01:02:27 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:02:27.474629 | orchestrator | 2025-09-16 01:02:27 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:02:30.511296 | orchestrator | 2025-09-16 01:02:30 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:02:30.512192 | orchestrator | 2025-09-16 01:02:30 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:02:30.512223 | orchestrator | 2025-09-16 01:02:30 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:02:30.512729 | orchestrator | 2025-09-16 01:02:30 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:02:30.512753 | orchestrator | 2025-09-16 01:02:30 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:02:33.545842 | orchestrator | 2025-09-16 01:02:33 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:02:33.546785 | orchestrator | 2025-09-16 01:02:33 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:02:33.549267 | orchestrator | 2025-09-16 01:02:33 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:02:33.551082 | orchestrator | 2025-09-16 01:02:33 | INFO  | Task 
0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:02:33.551695 | orchestrator | 2025-09-16 01:02:33 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:02:36.583713 | orchestrator | 2025-09-16 01:02:36 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:02:36.584297 | orchestrator | 2025-09-16 01:02:36 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:02:36.585324 | orchestrator | 2025-09-16 01:02:36 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:02:36.586225 | orchestrator | 2025-09-16 01:02:36 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:02:36.586548 | orchestrator | 2025-09-16 01:02:36 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:02:39.613132 | orchestrator | 2025-09-16 01:02:39 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:02:39.613715 | orchestrator | 2025-09-16 01:02:39 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:02:39.615675 | orchestrator | 2025-09-16 01:02:39 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:02:39.616466 | orchestrator | 2025-09-16 01:02:39 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:02:39.616490 | orchestrator | 2025-09-16 01:02:39 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:02:42.643065 | orchestrator | 2025-09-16 01:02:42 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:02:42.646127 | orchestrator | 2025-09-16 01:02:42 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:02:42.646941 | orchestrator | 2025-09-16 01:02:42 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:02:42.647697 | orchestrator | 2025-09-16 01:02:42 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:02:42.647996 | orchestrator | 2025-09-16 01:02:42 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:02:45.673624 | orchestrator | 2025-09-16 01:02:45 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:02:45.674784 | orchestrator | 2025-09-16 01:02:45 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:02:45.674843 | orchestrator | 2025-09-16 01:02:45 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:02:45.675502 | orchestrator | 2025-09-16 01:02:45 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:02:45.675529 | orchestrator | 2025-09-16 01:02:45 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:02:48.698835 | orchestrator | 2025-09-16 01:02:48 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:02:48.698929 | orchestrator | 2025-09-16 01:02:48 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:02:48.699771 | orchestrator | 2025-09-16 01:02:48 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:02:48.700639 | orchestrator | 2025-09-16 01:02:48 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:02:48.700667 | orchestrator | 2025-09-16 01:02:48 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:02:51.719931 | orchestrator | 2025-09-16 01:02:51 | INFO  | Task 
e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:02:51.721367 | orchestrator | 2025-09-16 01:02:51 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:02:51.721482 | orchestrator | 2025-09-16 01:02:51 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:02:51.722330 | orchestrator | 2025-09-16 01:02:51 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:02:51.722505 | orchestrator | 2025-09-16 01:02:51 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:02:54.744688 | orchestrator | 2025-09-16 01:02:54 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:02:54.744904 | orchestrator | 2025-09-16 01:02:54 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state STARTED 2025-09-16 01:02:54.745594 | orchestrator | 2025-09-16 01:02:54 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:02:54.747981 | orchestrator | 2025-09-16 01:02:54 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:02:54.748009 | orchestrator | 2025-09-16 01:02:54 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:02:57.776486 | orchestrator | 2025-09-16 01:02:57 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:02:57.776904 | orchestrator | 2025-09-16 01:02:57 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:02:57.778612 | orchestrator | 2025-09-16 01:02:57 | INFO  | Task 8e5f06d4-92a5-47ba-99f0-6e76bbcd2f01 is in state SUCCESS 2025-09-16 01:02:57.780434 | orchestrator | 2025-09-16 01:02:57.780491 | orchestrator | 2025-09-16 01:02:57.780504 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-16 01:02:57.780516 | orchestrator | 2025-09-16 01:02:57.780803 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-16 01:02:57.780834 | orchestrator | Tuesday 16 September 2025 00:59:18 +0000 (0:00:00.262) 0:00:00.262 ***** 2025-09-16 01:02:57.780854 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:02:57.780874 | orchestrator | ok: [testbed-node-1] 2025-09-16 01:02:57.780894 | orchestrator | ok: [testbed-node-2] 2025-09-16 01:02:57.780912 | orchestrator | ok: [testbed-node-3] 2025-09-16 01:02:57.780930 | orchestrator | ok: [testbed-node-4] 2025-09-16 01:02:57.780948 | orchestrator | ok: [testbed-node-5] 2025-09-16 01:02:57.780966 | orchestrator | 2025-09-16 01:02:57.780983 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-16 01:02:57.781000 | orchestrator | Tuesday 16 September 2025 00:59:19 +0000 (0:00:00.576) 0:00:00.838 ***** 2025-09-16 01:02:57.781016 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-09-16 01:02:57.781034 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-09-16 01:02:57.781050 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-09-16 01:02:57.781068 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-09-16 01:02:57.781086 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-09-16 01:02:57.781105 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-09-16 01:02:57.781123 | orchestrator | 2025-09-16 01:02:57.781142 | orchestrator | PLAY [Apply role cinder] ******************************************************* 
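The long runs of "Task <id> is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from the deploy wrapper polling its background tasks until each reports SUCCESS. A minimal sketch of such a wait loop, assuming a hypothetical get_task_state() helper in place of whatever the client actually calls:

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        # Poll every task ID until it leaves the STARTED state, printing the
        # same style of progress messages seen in the log above.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)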
2025-09-16 01:02:57.781238 | orchestrator | 2025-09-16 01:02:57.781261 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-16 01:02:57.781281 | orchestrator | Tuesday 16 September 2025 00:59:19 +0000 (0:00:00.414) 0:00:01.253 ***** 2025-09-16 01:02:57.781303 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 01:02:57.781324 | orchestrator | 2025-09-16 01:02:57.781343 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-09-16 01:02:57.781363 | orchestrator | Tuesday 16 September 2025 00:59:20 +0000 (0:00:00.778) 0:00:02.031 ***** 2025-09-16 01:02:57.781382 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-09-16 01:02:57.781401 | orchestrator | 2025-09-16 01:02:57.781417 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-09-16 01:02:57.781428 | orchestrator | Tuesday 16 September 2025 00:59:23 +0000 (0:00:03.675) 0:00:05.707 ***** 2025-09-16 01:02:57.781454 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-09-16 01:02:57.781466 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-09-16 01:02:57.781477 | orchestrator | 2025-09-16 01:02:57.781487 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-09-16 01:02:57.781498 | orchestrator | Tuesday 16 September 2025 00:59:30 +0000 (0:00:07.052) 0:00:12.759 ***** 2025-09-16 01:02:57.781509 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-16 01:02:57.781519 | orchestrator | 2025-09-16 01:02:57.781530 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-09-16 01:02:57.781541 | orchestrator | Tuesday 16 September 2025 00:59:34 +0000 (0:00:03.221) 0:00:15.981 ***** 2025-09-16 01:02:57.781551 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-16 01:02:57.781562 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-09-16 01:02:57.781573 | orchestrator | 2025-09-16 01:02:57.781583 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-09-16 01:02:57.781594 | orchestrator | Tuesday 16 September 2025 00:59:37 +0000 (0:00:03.309) 0:00:19.291 ***** 2025-09-16 01:02:57.781604 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-16 01:02:57.781615 | orchestrator | 2025-09-16 01:02:57.781625 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-09-16 01:02:57.781636 | orchestrator | Tuesday 16 September 2025 00:59:40 +0000 (0:00:03.043) 0:00:22.334 ***** 2025-09-16 01:02:57.781646 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-09-16 01:02:57.781657 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-09-16 01:02:57.781668 | orchestrator | 2025-09-16 01:02:57.781678 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-09-16 01:02:57.781689 | orchestrator | Tuesday 16 September 2025 00:59:48 +0000 (0:00:08.328) 0:00:30.663 ***** 2025-09-16 01:02:57.781703 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-16 01:02:57.781748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-16 01:02:57.781762 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.781778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 
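The container definitions in these loop items use two healthcheck styles: 'healthcheck_curl http://<ip>:8776' for cinder-api and 'healthcheck_port <service> 5672' for the scheduler, volume, and backup services. A rough sketch of the two styles with simplified stand-in functions, not kolla's actual healthcheck scripts:

    import socket
    import urllib.request

    def check_http(url, timeout=30):
        # Analogous to healthcheck_curl: the API endpoint must answer an HTTP request.
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 500
        except OSError:
            return False

    def check_port(host, port, timeout=30):
        # Rough analogue of healthcheck_port: succeeds if a TCP connection to the
        # given port (here 5672, the RabbitMQ port named in the items) can be made.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False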
2025-09-16 01:02:57.781791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.781804 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.781823 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.781843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.781859 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.781871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.781882 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.781903 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.781932 | orchestrator | 2025-09-16 01:02:57.781958 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-16 01:02:57.781978 | orchestrator | Tuesday 16 September 2025 00:59:51 +0000 (0:00:02.597) 0:00:33.260 ***** 2025-09-16 01:02:57.781995 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:02:57.782012 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:02:57.782105 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:02:57.782127 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:02:57.782148 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:02:57.782192 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:02:57.782212 | orchestrator | 2025-09-16 01:02:57.782227 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-16 01:02:57.782239 | orchestrator | Tuesday 16 September 2025 00:59:52 +0000 (0:00:01.244) 0:00:34.504 ***** 2025-09-16 01:02:57.782250 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:02:57.782261 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:02:57.782271 | orchestrator | 
skipping: [testbed-node-2] 2025-09-16 01:02:57.782282 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 01:02:57.782294 | orchestrator | 2025-09-16 01:02:57.782305 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-09-16 01:02:57.782316 | orchestrator | Tuesday 16 September 2025 00:59:53 +0000 (0:00:01.067) 0:00:35.571 ***** 2025-09-16 01:02:57.782327 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-09-16 01:02:57.782338 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-09-16 01:02:57.782348 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-09-16 01:02:57.782359 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-09-16 01:02:57.782370 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-09-16 01:02:57.782381 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-09-16 01:02:57.782392 | orchestrator | 2025-09-16 01:02:57.782403 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-09-16 01:02:57.782413 | orchestrator | Tuesday 16 September 2025 00:59:55 +0000 (0:00:01.994) 0:00:37.566 ***** 2025-09-16 01:02:57.782432 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-16 01:02:57.782446 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-16 01:02:57.782468 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 
'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-16 01:02:57.782490 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-16 01:02:57.782502 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-16 01:02:57.782518 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-16 01:02:57.782530 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-16 01:02:57.782550 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-16 01:02:57.782568 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-16 01:02:57.782580 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-16 01:02:57.782596 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 
'enabled': True}]) 2025-09-16 01:02:57.782608 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-16 01:02:57.782626 | orchestrator | 2025-09-16 01:02:57.782637 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-09-16 01:02:57.782648 | orchestrator | Tuesday 16 September 2025 00:59:59 +0000 (0:00:03.896) 0:00:41.462 ***** 2025-09-16 01:02:57.782659 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-16 01:02:57.782671 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-16 01:02:57.782682 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-16 01:02:57.782693 | orchestrator | 2025-09-16 01:02:57.782703 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-09-16 01:02:57.782714 | orchestrator | Tuesday 16 September 2025 01:00:01 +0000 (0:00:02.024) 0:00:43.487 ***** 2025-09-16 01:02:57.782725 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-09-16 01:02:57.782736 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-09-16 01:02:57.782746 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-09-16 01:02:57.782757 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-09-16 01:02:57.782768 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-09-16 01:02:57.782797 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-09-16 01:02:57.782808 | orchestrator | 2025-09-16 01:02:57.782819 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-09-16 01:02:57.782830 | orchestrator | Tuesday 16 September 2025 01:00:04 +0000 (0:00:03.317) 0:00:46.804 ***** 2025-09-16 01:02:57.782841 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-09-16 01:02:57.782852 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-09-16 01:02:57.782862 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-09-16 01:02:57.782873 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-09-16 01:02:57.782884 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-09-16 01:02:57.782895 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-09-16 01:02:57.782905 | orchestrator | 2025-09-16 01:02:57.782916 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-09-16 01:02:57.782927 | orchestrator | Tuesday 16 September 2025 01:00:06 +0000 (0:00:01.046) 0:00:47.851 
***** 2025-09-16 01:02:57.782938 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:02:57.782948 | orchestrator | 2025-09-16 01:02:57.782959 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-09-16 01:02:57.782970 | orchestrator | Tuesday 16 September 2025 01:00:06 +0000 (0:00:00.099) 0:00:47.951 ***** 2025-09-16 01:02:57.782981 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:02:57.782992 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:02:57.783002 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:02:57.783013 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:02:57.783023 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:02:57.783034 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:02:57.783044 | orchestrator | 2025-09-16 01:02:57.783055 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-16 01:02:57.783066 | orchestrator | Tuesday 16 September 2025 01:00:06 +0000 (0:00:00.597) 0:00:48.548 ***** 2025-09-16 01:02:57.783078 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 01:02:57.783102 | orchestrator | 2025-09-16 01:02:57.783113 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-09-16 01:02:57.783124 | orchestrator | Tuesday 16 September 2025 01:00:07 +0000 (0:00:01.135) 0:00:49.683 ***** 2025-09-16 01:02:57.783140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-16 01:02:57.783152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-16 01:02:57.783238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-16 01:02:57.783250 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.783268 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.783287 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.783298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.783310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.783329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.783341 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.783359 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.783375 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.783386 | orchestrator | 2025-09-16 01:02:57.783397 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-09-16 01:02:57.783408 | orchestrator | Tuesday 16 September 2025 01:00:10 +0000 (0:00:03.101) 0:00:52.785 ***** 2025-09-16 01:02:57.783418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-16 01:02:57.783435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-16 01:02:57.783445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.783461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.783471 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:02:57.783481 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:02:57.783495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-16 01:02:57.783506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.783515 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:02:57.783525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.783541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.783551 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:02:57.783567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.783581 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.783598 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:02:57.783616 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.783634 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.783651 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:02:57.783667 | 
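The blanket skips in the two TLS copy tasks above are consistent with the 'tls_backend': 'no' flags carried by the cinder_api haproxy entries in the printed items: no cinder service on any node asks for a backend certificate. Below is a minimal Python sketch, not kolla-ansible source, that reconstructs the service map from those items and the skip condition; the names cinder_services and needs_backend_tls are purely illustrative.

# Reconstruction of (part of) the service map printed in the loop items above.
cinder_services = {
    "cinder-api": {
        "container_name": "cinder_api",
        "enabled": True,
        "image": "registry.osism.tech/kolla/cinder-api:2024.2",
        "haproxy": {
            "cinder_api": {"enabled": "yes", "mode": "http",
                           "port": "8776", "tls_backend": "no"},
            "cinder_api_external": {"enabled": "yes", "mode": "http",
                                    "port": "8776", "tls_backend": "no"},
        },
    },
    "cinder-volume": {
        "container_name": "cinder_volume",
        "enabled": True,
        "image": "registry.osism.tech/kolla/cinder-volume:2024.2",
    },
}

def needs_backend_tls(service):
    # A service would only receive backend TLS material if at least one of its
    # haproxy frontends declared tls_backend: 'yes'; here none do.
    return any(frontend.get("tls_backend") == "yes"
               for frontend in service.get("haproxy", {}).values())

for name, svc in cinder_services.items():
    # Mirrors the per-item skip decision visible in the log output above.
    print(name, "copy TLS material" if needs_backend_tls(svc) else "skip")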
orchestrator | 2025-09-16 01:02:57.783682 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-09-16 01:02:57.783698 | orchestrator | Tuesday 16 September 2025 01:00:12 +0000 (0:00:01.876) 0:00:54.662 ***** 2025-09-16 01:02:57.783725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-16 01:02:57.783754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.783777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-16 01:02:57.783788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.783798 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:02:57.783807 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:02:57.783817 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-16 01:02:57.783834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.783850 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:02:57.783860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.783874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.783884 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:02:57.783894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': 
[''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.783904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.783914 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:02:57.783929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.783945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.783955 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:02:57.783964 | orchestrator | 2025-09-16 01:02:57.783974 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-09-16 01:02:57.783984 | orchestrator | Tuesday 16 September 2025 01:00:14 +0000 (0:00:01.548) 0:00:56.210 ***** 2025-09-16 01:02:57.784001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-16 01:02:57.784012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-16 01:02:57.784022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-16 01:02:57.784044 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.784054 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 
'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.784068 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.784079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.784089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.784098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.784120 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.784130 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.784144 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.784178 | orchestrator | 2025-09-16 01:02:57.784189 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-09-16 01:02:57.784199 | orchestrator | Tuesday 16 September 2025 01:00:17 +0000 (0:00:03.455) 0:00:59.666 ***** 2025-09-16 01:02:57.784209 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-16 01:02:57.784219 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:02:57.784229 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-16 01:02:57.784238 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:02:57.784248 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-16 01:02:57.784257 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-16 01:02:57.784267 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:02:57.784276 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-16 01:02:57.784286 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-16 01:02:57.784295 | orchestrator | 2025-09-16 01:02:57.784305 | 
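The skip/changed split of the cinder-wsgi.conf task above, and of the cinder.conf task that follows, lines up with inventory group membership: only the cinder-api hosts (testbed-node-0..2) render the WSGI config, while every host running any cinder service renders cinder.conf. A minimal Python sketch of that gating; the group assignments are read off the log output and the variable node_groups is purely illustrative.

# Group membership as observed in this run: control nodes 0-2 carry the API
# and scheduler, storage nodes 3-5 carry volume and backup.
node_groups = {
    "testbed-node-0": {"cinder-api", "cinder-scheduler"},
    "testbed-node-1": {"cinder-api", "cinder-scheduler"},
    "testbed-node-2": {"cinder-api", "cinder-scheduler"},
    "testbed-node-3": {"cinder-volume", "cinder-backup"},
    "testbed-node-4": {"cinder-volume", "cinder-backup"},
    "testbed-node-5": {"cinder-volume", "cinder-backup"},
}

for node, groups in sorted(node_groups.items()):
    renders_wsgi_conf = "cinder-api" in groups   # cinder-wsgi.conf: API hosts only
    renders_cinder_conf = bool(groups)           # cinder.conf: every cinder host
    print(node, "wsgi.conf:", renders_wsgi_conf, "cinder.conf:", renders_cinder_conf)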
orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-09-16 01:02:57.784314 | orchestrator | Tuesday 16 September 2025 01:00:20 +0000 (0:00:02.497) 0:01:02.163 ***** 2025-09-16 01:02:57.784330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-16 01:02:57.784347 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.784357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-16 01:02:57.784372 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.784382 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.784403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-16 01:02:57.784414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.784424 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.784438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.784449 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.784459 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.784474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.784484 | orchestrator | 2025-09-16 01:02:57.784494 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-09-16 01:02:57.784503 | orchestrator | Tuesday 16 September 2025 01:00:34 +0000 (0:00:14.005) 0:01:16.168 ***** 2025-09-16 01:02:57.784518 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:02:57.784529 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:02:57.784538 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:02:57.784548 | orchestrator | changed: [testbed-node-3] 2025-09-16 01:02:57.784558 | orchestrator | changed: [testbed-node-4] 2025-09-16 01:02:57.784567 | orchestrator | changed: [testbed-node-5] 2025-09-16 01:02:57.784577 | orchestrator | 2025-09-16 01:02:57.784586 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-09-16 01:02:57.784596 | orchestrator | Tuesday 16 September 2025 
01:00:36 +0000 (0:00:02.068) 0:01:18.236 ***** 2025-09-16 01:02:57.784606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-16 01:02:57.784620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.784631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-16 01:02:57.784647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.784657 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:02:57.784666 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:02:57.784682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-16 01:02:57.784692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.784702 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:02:57.784712 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.784727 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.784752 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:02:57.784770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.784787 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.784804 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:02:57.784829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.784849 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-16 01:02:57.784866 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:02:57.784884 | orchestrator | 2025-09-16 01:02:57.784901 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-16 01:02:57.784915 | orchestrator | Tuesday 16 September 2025 01:00:38 +0000 (0:00:01.617) 0:01:19.854 ***** 2025-09-16 01:02:57.784925 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:02:57.784944 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:02:57.784954 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:02:57.784972 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:02:57.784982 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:02:57.784991 | 
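The cinder_api items shown above also carry an haproxy sub-dict describing an internal and an external listener ('mode': 'http', 'port': '8776', 'listen_port': '8776', 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no'). Purely as a sketch of how such a dict could be rendered into load-balancer configuration, assuming a made-up internal VIP and taking the backend addresses from the healthcheck_curl URLs above, a minimal renderer might look like the following; kolla-ansible's real haproxy templates are considerably richer.

def render_listen(name, cfg, vip, backends):
    """Render a minimal HAProxy listen block from a kolla-style haproxy service dict."""
    lines = [
        f"listen {name}",
        f"    mode {cfg['mode']}",
        f"    bind {vip}:{cfg['listen_port']}",
    ]
    for host, addr in backends:
        lines.append(f"    server {host} {addr}:{cfg['port']} check")
    return "\n".join(lines)

# 'cinder_api' values copied from the log; the VIP 192.168.16.254 is an assumption.
cinder_api = {"enabled": "yes", "mode": "http", "external": False,
              "port": "8776", "listen_port": "8776", "tls_backend": "no"}
backends = [("testbed-node-0", "192.168.16.10"),
            ("testbed-node-1", "192.168.16.11"),
            ("testbed-node-2", "192.168.16.12")]
print(render_listen("cinder_api", cinder_api, vip="192.168.16.254", backends=backends))
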
orchestrator | skipping: [testbed-node-5] 2025-09-16 01:02:57.785001 | orchestrator | 2025-09-16 01:02:57.785010 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-09-16 01:02:57.785020 | orchestrator | Tuesday 16 September 2025 01:00:38 +0000 (0:00:00.492) 0:01:20.347 ***** 2025-09-16 01:02:57.785030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-16 01:02:57.785040 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.785057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-16 01:02:57.785068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-16 01:02:57.785092 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.785102 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.785113 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.785225 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.785240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.785250 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.785273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.785283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:02:57.785293 | orchestrator | 2025-09-16 01:02:57.785303 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-16 01:02:57.785313 | orchestrator | Tuesday 16 September 2025 01:00:40 +0000 (0:00:02.088) 0:01:22.435 ***** 2025-09-16 01:02:57.785323 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:02:57.785333 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:02:57.785343 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:02:57.785352 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:02:57.785362 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:02:57.785371 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:02:57.785381 | orchestrator | 2025-09-16 01:02:57.785390 | orchestrator | TASK [cinder : Creating Cinder database] 
*************************************** 2025-09-16 01:02:57.785400 | orchestrator | Tuesday 16 September 2025 01:00:41 +0000 (0:00:00.463) 0:01:22.899 ***** 2025-09-16 01:02:57.785410 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:02:57.785419 | orchestrator | 2025-09-16 01:02:57.785429 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-09-16 01:02:57.785438 | orchestrator | Tuesday 16 September 2025 01:00:43 +0000 (0:00:02.339) 0:01:25.239 ***** 2025-09-16 01:02:57.785448 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:02:57.785457 | orchestrator | 2025-09-16 01:02:57.785467 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-09-16 01:02:57.785476 | orchestrator | Tuesday 16 September 2025 01:00:45 +0000 (0:00:02.268) 0:01:27.507 ***** 2025-09-16 01:02:57.785486 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:02:57.785496 | orchestrator | 2025-09-16 01:02:57.785505 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-16 01:02:57.785515 | orchestrator | Tuesday 16 September 2025 01:01:05 +0000 (0:00:20.217) 0:01:47.724 ***** 2025-09-16 01:02:57.785524 | orchestrator | 2025-09-16 01:02:57.785539 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-16 01:02:57.785548 | orchestrator | Tuesday 16 September 2025 01:01:05 +0000 (0:00:00.061) 0:01:47.786 ***** 2025-09-16 01:02:57.785558 | orchestrator | 2025-09-16 01:02:57.785568 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-16 01:02:57.785578 | orchestrator | Tuesday 16 September 2025 01:01:06 +0000 (0:00:00.056) 0:01:47.842 ***** 2025-09-16 01:02:57.785593 | orchestrator | 2025-09-16 01:02:57.785603 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-16 01:02:57.785612 | orchestrator | Tuesday 16 September 2025 01:01:06 +0000 (0:00:00.059) 0:01:47.902 ***** 2025-09-16 01:02:57.785622 | orchestrator | 2025-09-16 01:02:57.785631 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-16 01:02:57.785641 | orchestrator | Tuesday 16 September 2025 01:01:06 +0000 (0:00:00.058) 0:01:47.961 ***** 2025-09-16 01:02:57.785650 | orchestrator | 2025-09-16 01:02:57.785660 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-16 01:02:57.785670 | orchestrator | Tuesday 16 September 2025 01:01:06 +0000 (0:00:00.058) 0:01:48.019 ***** 2025-09-16 01:02:57.785679 | orchestrator | 2025-09-16 01:02:57.785689 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-09-16 01:02:57.785698 | orchestrator | Tuesday 16 September 2025 01:01:06 +0000 (0:00:00.062) 0:01:48.082 ***** 2025-09-16 01:02:57.785708 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:02:57.785717 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:02:57.785727 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:02:57.785736 | orchestrator | 2025-09-16 01:02:57.785746 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-09-16 01:02:57.785755 | orchestrator | Tuesday 16 September 2025 01:01:26 +0000 (0:00:20.315) 0:02:08.397 ***** 2025-09-16 01:02:57.785765 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:02:57.785775 | orchestrator | 
changed: [testbed-node-0] 2025-09-16 01:02:57.785784 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:02:57.785793 | orchestrator | 2025-09-16 01:02:57.785803 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-09-16 01:02:57.785813 | orchestrator | Tuesday 16 September 2025 01:01:36 +0000 (0:00:10.043) 0:02:18.441 ***** 2025-09-16 01:02:57.785822 | orchestrator | changed: [testbed-node-3] 2025-09-16 01:02:57.785832 | orchestrator | changed: [testbed-node-5] 2025-09-16 01:02:57.785841 | orchestrator | changed: [testbed-node-4] 2025-09-16 01:02:57.785851 | orchestrator | 2025-09-16 01:02:57.785860 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-09-16 01:02:57.785874 | orchestrator | Tuesday 16 September 2025 01:02:48 +0000 (0:01:11.882) 0:03:30.324 ***** 2025-09-16 01:02:57.785884 | orchestrator | changed: [testbed-node-5] 2025-09-16 01:02:57.785893 | orchestrator | changed: [testbed-node-4] 2025-09-16 01:02:57.785903 | orchestrator | changed: [testbed-node-3] 2025-09-16 01:02:57.785912 | orchestrator | 2025-09-16 01:02:57.785922 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-09-16 01:02:57.785932 | orchestrator | Tuesday 16 September 2025 01:02:55 +0000 (0:00:07.067) 0:03:37.391 ***** 2025-09-16 01:02:57.785941 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:02:57.785951 | orchestrator | 2025-09-16 01:02:57.785960 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 01:02:57.785970 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-16 01:02:57.785981 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-16 01:02:57.785991 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-16 01:02:57.786001 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-16 01:02:57.786011 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-16 01:02:57.786049 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-16 01:02:57.786064 | orchestrator | 2025-09-16 01:02:57.786074 | orchestrator | 2025-09-16 01:02:57.786084 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 01:02:57.786093 | orchestrator | Tuesday 16 September 2025 01:02:56 +0000 (0:00:00.589) 0:03:37.981 ***** 2025-09-16 01:02:57.786103 | orchestrator | =============================================================================== 2025-09-16 01:02:57.786113 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 71.88s 2025-09-16 01:02:57.786122 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 20.32s 2025-09-16 01:02:57.786132 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.22s 2025-09-16 01:02:57.786142 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 14.01s 2025-09-16 01:02:57.786151 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.04s 2025-09-16 01:02:57.786179 | orchestrator | 
service-ks-register : cinder | Granting user roles ---------------------- 8.33s 2025-09-16 01:02:57.786189 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 7.07s 2025-09-16 01:02:57.786199 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.05s 2025-09-16 01:02:57.786214 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.90s 2025-09-16 01:02:57.786225 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.68s 2025-09-16 01:02:57.786235 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.46s 2025-09-16 01:02:57.786244 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.32s 2025-09-16 01:02:57.786254 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.31s 2025-09-16 01:02:57.786263 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.22s 2025-09-16 01:02:57.786273 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.10s 2025-09-16 01:02:57.786282 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.04s 2025-09-16 01:02:57.786292 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.60s 2025-09-16 01:02:57.786302 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.50s 2025-09-16 01:02:57.786311 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.34s 2025-09-16 01:02:57.786321 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.27s 2025-09-16 01:02:57.786330 | orchestrator | 2025-09-16 01:02:57 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:02:57.786340 | orchestrator | 2025-09-16 01:02:57 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:02:57.786350 | orchestrator | 2025-09-16 01:02:57 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:03:00.803513 | orchestrator | 2025-09-16 01:03:00 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:03:00.804490 | orchestrator | 2025-09-16 01:03:00 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:03:00.805951 | orchestrator | 2025-09-16 01:03:00 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:03:00.807004 | orchestrator | 2025-09-16 01:03:00 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:03:00.807102 | orchestrator | 2025-09-16 01:03:00 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:03:03.864522 | orchestrator | 2025-09-16 01:03:03 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:03:03.864624 | orchestrator | 2025-09-16 01:03:03 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:03:03.865043 | orchestrator | 2025-09-16 01:03:03 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:03:03.866718 | orchestrator | 2025-09-16 01:03:03 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:03:03.866807 | orchestrator | 2025-09-16 01:03:03 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:03:06.892461 | orchestrator 
| 2025-09-16 01:03:06 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:03:06.892884 | orchestrator | 2025-09-16 01:03:06 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:03:06.893642 | orchestrator | 2025-09-16 01:03:06 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:03:06.894480 | orchestrator | 2025-09-16 01:03:06 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:03:06.894511 | orchestrator | 2025-09-16 01:03:06 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:03:09.925032 | orchestrator | 2025-09-16 01:03:09 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:03:09.925368 | orchestrator | 2025-09-16 01:03:09 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:03:09.925982 | orchestrator | 2025-09-16 01:03:09 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:03:09.926659 | orchestrator | 2025-09-16 01:03:09 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:03:09.926697 | orchestrator | 2025-09-16 01:03:09 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:03:12.951086 | orchestrator | 2025-09-16 01:03:12 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:03:12.951465 | orchestrator | 2025-09-16 01:03:12 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:03:12.952886 | orchestrator | 2025-09-16 01:03:12 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:03:12.954347 | orchestrator | 2025-09-16 01:03:12 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:03:12.954382 | orchestrator | 2025-09-16 01:03:12 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:03:15.979911 | orchestrator | 2025-09-16 01:03:15 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:03:15.981015 | orchestrator | 2025-09-16 01:03:15 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:03:15.981530 | orchestrator | 2025-09-16 01:03:15 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:03:15.982264 | orchestrator | 2025-09-16 01:03:15 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:03:15.982369 | orchestrator | 2025-09-16 01:03:15 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:03:19.019416 | orchestrator | 2025-09-16 01:03:19 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:03:19.024452 | orchestrator | 2025-09-16 01:03:19 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:03:19.026839 | orchestrator | 2025-09-16 01:03:19 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:03:19.029516 | orchestrator | 2025-09-16 01:03:19 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:03:19.031145 | orchestrator | 2025-09-16 01:03:19 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:03:22.054502 | orchestrator | 2025-09-16 01:03:22 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:03:22.055988 | orchestrator | 2025-09-16 01:03:22 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:03:22.057422 | orchestrator | 
2025-09-16 01:03:22 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:03:22.058409 | orchestrator | 2025-09-16 01:03:22 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:03:22.058691 | orchestrator | 2025-09-16 01:03:22 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:03:25.089587 | orchestrator | 2025-09-16 01:03:25 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:03:25.089802 | orchestrator | 2025-09-16 01:03:25 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:03:25.091063 | orchestrator | 2025-09-16 01:03:25 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:03:25.092472 | orchestrator | 2025-09-16 01:03:25 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:03:25.092682 | orchestrator | 2025-09-16 01:03:25 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:03:28.117869 | orchestrator | 2025-09-16 01:03:28 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:03:28.117976 | orchestrator | 2025-09-16 01:03:28 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:03:28.118202 | orchestrator | 2025-09-16 01:03:28 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:03:28.121346 | orchestrator | 2025-09-16 01:03:28 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:03:28.121369 | orchestrator | 2025-09-16 01:03:28 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:03:31.150643 | orchestrator | 2025-09-16 01:03:31 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:03:31.151226 | orchestrator | 2025-09-16 01:03:31 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:03:31.152042 | orchestrator | 2025-09-16 01:03:31 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:03:31.153387 | orchestrator | 2025-09-16 01:03:31 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:03:31.153411 | orchestrator | 2025-09-16 01:03:31 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:03:34.183509 | orchestrator | 2025-09-16 01:03:34 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:03:34.183809 | orchestrator | 2025-09-16 01:03:34 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:03:34.184761 | orchestrator | 2025-09-16 01:03:34 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:03:34.185437 | orchestrator | 2025-09-16 01:03:34 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:03:34.185499 | orchestrator | 2025-09-16 01:03:34 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:03:37.218004 | orchestrator | 2025-09-16 01:03:37 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:03:37.218443 | orchestrator | 2025-09-16 01:03:37 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:03:37.219535 | orchestrator | 2025-09-16 01:03:37 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:03:37.220499 | orchestrator | 2025-09-16 01:03:37 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:03:37.220622 | orchestrator | 
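The long run of "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" messages comes from a simple poll-until-done loop on the orchestrator. As a generic illustration of that pattern only (not the actual osism client code), such a loop could be written as follows; get_task_state is a hypothetical stand-in for whatever call reports a task's state.

import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0, log=print):
    """Poll each task until it leaves the STARTED state, logging like the output above."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                pending.discard(task_id)
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
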
2025-09-16 01:03:37 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:03:40.241719 | orchestrator | 2025-09-16 01:03:40 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:03:40.241967 | orchestrator | 2025-09-16 01:03:40 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:03:40.243686 | orchestrator | 2025-09-16 01:03:40 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:03:40.244434 | orchestrator | 2025-09-16 01:03:40 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state STARTED 2025-09-16 01:03:40.244459 | orchestrator | 2025-09-16 01:03:40 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:03:43.277696 | orchestrator | 2025-09-16 01:03:43 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:03:43.277853 | orchestrator | 2025-09-16 01:03:43 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:03:43.278501 | orchestrator | 2025-09-16 01:03:43 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:03:43.279079 | orchestrator | 2025-09-16 01:03:43 | INFO  | Task 642e7bf6-e04b-438e-86f0-0a0c3ed69340 is in state STARTED 2025-09-16 01:03:43.280397 | orchestrator | 2025-09-16 01:03:43 | INFO  | Task 0f74735c-e53d-4ed0-9f74-c5c03734b507 is in state SUCCESS 2025-09-16 01:03:43.281688 | orchestrator | 2025-09-16 01:03:43.281733 | orchestrator | 2025-09-16 01:03:43.281763 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-16 01:03:43.281777 | orchestrator | 2025-09-16 01:03:43.281789 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-16 01:03:43.281801 | orchestrator | Tuesday 16 September 2025 01:01:44 +0000 (0:00:00.271) 0:00:00.271 ***** 2025-09-16 01:03:43.281812 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:03:43.281863 | orchestrator | ok: [testbed-node-1] 2025-09-16 01:03:43.281875 | orchestrator | ok: [testbed-node-2] 2025-09-16 01:03:43.281918 | orchestrator | 2025-09-16 01:03:43.281929 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-16 01:03:43.281940 | orchestrator | Tuesday 16 September 2025 01:01:44 +0000 (0:00:00.338) 0:00:00.609 ***** 2025-09-16 01:03:43.281951 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-09-16 01:03:43.281963 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-09-16 01:03:43.281974 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-09-16 01:03:43.281985 | orchestrator | 2025-09-16 01:03:43.281997 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-09-16 01:03:43.282008 | orchestrator | 2025-09-16 01:03:43.282340 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-16 01:03:43.282360 | orchestrator | Tuesday 16 September 2025 01:01:44 +0000 (0:00:00.370) 0:00:00.979 ***** 2025-09-16 01:03:43.282372 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 01:03:43.282384 | orchestrator | 2025-09-16 01:03:43.282395 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-09-16 01:03:43.282406 | orchestrator | Tuesday 16 September 2025 01:01:45 +0000 (0:00:00.533) 0:00:01.513 
***** 2025-09-16 01:03:43.282418 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-09-16 01:03:43.282429 | orchestrator | 2025-09-16 01:03:43.282440 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-09-16 01:03:43.282450 | orchestrator | Tuesday 16 September 2025 01:01:48 +0000 (0:00:03.525) 0:00:05.039 ***** 2025-09-16 01:03:43.282461 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-09-16 01:03:43.282497 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-09-16 01:03:43.282509 | orchestrator | 2025-09-16 01:03:43.282520 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-09-16 01:03:43.282531 | orchestrator | Tuesday 16 September 2025 01:01:54 +0000 (0:00:05.619) 0:00:10.659 ***** 2025-09-16 01:03:43.282542 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-16 01:03:43.282553 | orchestrator | 2025-09-16 01:03:43.282564 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-09-16 01:03:43.282574 | orchestrator | Tuesday 16 September 2025 01:01:57 +0000 (0:00:03.026) 0:00:13.686 ***** 2025-09-16 01:03:43.282585 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-16 01:03:43.282596 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-09-16 01:03:43.282607 | orchestrator | 2025-09-16 01:03:43.282618 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-09-16 01:03:43.282628 | orchestrator | Tuesday 16 September 2025 01:02:01 +0000 (0:00:04.008) 0:00:17.694 ***** 2025-09-16 01:03:43.282639 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-16 01:03:43.282650 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-09-16 01:03:43.282661 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-09-16 01:03:43.282672 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-09-16 01:03:43.282683 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-09-16 01:03:43.282694 | orchestrator | 2025-09-16 01:03:43.282705 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-09-16 01:03:43.282716 | orchestrator | Tuesday 16 September 2025 01:02:17 +0000 (0:00:16.396) 0:00:34.090 ***** 2025-09-16 01:03:43.282727 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-09-16 01:03:43.282737 | orchestrator | 2025-09-16 01:03:43.282748 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-09-16 01:03:43.282759 | orchestrator | Tuesday 16 September 2025 01:02:22 +0000 (0:00:04.681) 0:00:38.772 ***** 2025-09-16 01:03:43.282773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-16 01:03:43.282810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-16 01:03:43.282831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-16 01:03:43.282843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.282856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.282868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.282892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.282906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.282924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.282935 | orchestrator | 2025-09-16 01:03:43.282949 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-09-16 01:03:43.282962 | orchestrator | Tuesday 16 September 2025 01:02:24 +0000 (0:00:01.812) 0:00:40.584 ***** 2025-09-16 01:03:43.282975 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-09-16 01:03:43.282988 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-09-16 01:03:43.283001 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-09-16 01:03:43.283014 | orchestrator | 2025-09-16 01:03:43.283026 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-09-16 01:03:43.283039 | orchestrator | Tuesday 16 September 2025 01:02:25 +0000 (0:00:00.878) 0:00:41.463 ***** 2025-09-16 01:03:43.283051 | 
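The service-ks-register tasks earlier in this play ("Creating services", "Creating endpoints", "Creating users", "Creating roles", "Granting user roles") register barbican in Keystone. A rough openstacksdk equivalent is sketched below; the service type, endpoint URLs and role names are taken from the log, while the clouds.yaml entry, the password handling and the use of openstacksdk itself are illustrative assumptions rather than the kolla-ansible role.

import openstack

conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry

# Service and endpoints, as in "barbican | Creating services/endpoints" above.
service = conn.identity.create_service(name="barbican", type="key-manager")
for interface, url in [("internal", "https://api-int.testbed.osism.xyz:9311"),
                       ("public", "https://api.testbed.osism.xyz:9311")]:
    conn.identity.create_endpoint(service_id=service.id, interface=interface, url=url)

# Service user and roles, as in "Creating users/roles" and "Granting user roles" above.
project = conn.identity.find_project("service")
user = conn.identity.create_user(name="barbican", password="REDACTED",  # secret not shown in the log
                                 default_project_id=project.id)
for role_name in ("key-manager:service-admin", "creator", "observer", "audit"):
    if conn.identity.find_role(role_name) is None:
        conn.identity.create_role(name=role_name)
admin_role = conn.identity.find_role("admin")
conn.identity.assign_project_role_to_user(project.id, user.id, admin_role.id)
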
orchestrator | skipping: [testbed-node-0] 2025-09-16 01:03:43.283064 | orchestrator | 2025-09-16 01:03:43.283076 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-09-16 01:03:43.283089 | orchestrator | Tuesday 16 September 2025 01:02:25 +0000 (0:00:00.110) 0:00:41.574 ***** 2025-09-16 01:03:43.283102 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:03:43.283114 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:03:43.283127 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:03:43.283140 | orchestrator | 2025-09-16 01:03:43.283152 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-16 01:03:43.283185 | orchestrator | Tuesday 16 September 2025 01:02:25 +0000 (0:00:00.355) 0:00:41.929 ***** 2025-09-16 01:03:43.283198 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 01:03:43.283212 | orchestrator | 2025-09-16 01:03:43.283224 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-09-16 01:03:43.283237 | orchestrator | Tuesday 16 September 2025 01:02:26 +0000 (0:00:00.822) 0:00:42.751 ***** 2025-09-16 01:03:43.283250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-16 01:03:43.283277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-16 01:03:43.283299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-16 01:03:43.283311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.283322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.283334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.283345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.283376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.283389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.283400 | orchestrator | 2025-09-16 01:03:43.283411 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-09-16 01:03:43.283422 | orchestrator | Tuesday 16 September 2025 01:02:30 +0000 (0:00:03.617) 0:00:46.369 ***** 2025-09-16 01:03:43.283433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-16 01:03:43.283445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-16 01:03:43.283456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:03:43.283468 | 
orchestrator | skipping: [testbed-node-0] 2025-09-16 01:03:43.283490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-16 01:03:43.283509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-16 01:03:43.283521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:03:43.283532 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:03:43.283544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-16 01:03:43.283555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-16 01:03:43.283567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:03:43.283584 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:03:43.283595 | orchestrator | 2025-09-16 01:03:43.283606 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-09-16 01:03:43.283617 | orchestrator | Tuesday 16 September 2025 01:02:30 +0000 (0:00:00.763) 0:00:47.133 ***** 2025-09-16 01:03:43.283648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-16 01:03:43.283660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-16 01:03:43.283672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:03:43.283683 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:03:43.283695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-16 01:03:43.283706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-16 01:03:43.283724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:03:43.283735 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:03:43.283758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}}}})  2025-09-16 01:03:43.283771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-16 01:03:43.283782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:03:43.283794 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:03:43.283804 | orchestrator | 2025-09-16 01:03:43.283815 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-09-16 01:03:43.283826 | orchestrator | Tuesday 16 September 2025 01:02:32 +0000 (0:00:01.322) 0:00:48.455 ***** 2025-09-16 01:03:43.283838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-16 01:03:43.283867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-16 01:03:43.283880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-16 01:03:43.283891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.283903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.283914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.283932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.283948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.283964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.283976 | orchestrator | 2025-09-16 01:03:43.283987 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-09-16 01:03:43.283998 | orchestrator | Tuesday 16 September 2025 01:02:35 +0000 (0:00:03.425) 0:00:51.881 ***** 2025-09-16 01:03:43.284009 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:03:43.284020 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:03:43.284031 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:03:43.284042 | orchestrator | 2025-09-16 01:03:43.284052 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-09-16 01:03:43.284063 | orchestrator | Tuesday 16 September 2025 01:02:38 +0000 (0:00:02.402) 0:00:54.283 ***** 2025-09-16 01:03:43.284074 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-16 01:03:43.284085 | orchestrator | 2025-09-16 01:03:43.284096 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-09-16 01:03:43.284106 | orchestrator | Tuesday 16 September 2025 01:02:39 +0000 (0:00:01.160) 0:00:55.444 ***** 2025-09-16 01:03:43.284117 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:03:43.284128 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:03:43.284139 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:03:43.284150 | orchestrator | 2025-09-16 01:03:43.284193 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-09-16 01:03:43.284205 | orchestrator | Tuesday 16 September 2025 01:02:40 +0000 (0:00:00.944) 0:00:56.388 ***** 2025-09-16 01:03:43.284216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-16 01:03:43.284235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-16 01:03:43.284258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-16 01:03:43.284271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.284283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.284294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.284312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.284323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.284334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.284346 | orchestrator | 2025-09-16 01:03:43.284357 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-09-16 01:03:43.284368 | orchestrator | Tuesday 16 September 2025 01:02:47 +0000 (0:00:07.745) 0:01:04.133 ***** 2025-09-16 01:03:43.284391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-16 01:03:43.284403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-16 01:03:43.284421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:03:43.284432 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:03:43.284444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-16 01:03:43.284456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-16 01:03:43.284477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:03:43.284489 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:03:43.284500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-16 01:03:43.284512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-16 01:03:43.284530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:03:43.284541 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:03:43.284552 | orchestrator | 2025-09-16 01:03:43.284563 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-09-16 01:03:43.284574 | orchestrator | Tuesday 16 September 2025 01:02:49 +0000 (0:00:01.138) 0:01:05.272 ***** 2025-09-16 01:03:43.284586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-16 01:03:43.284608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-16 01:03:43.284620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-16 01:03:43.284642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.284654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.284665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.284677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.284700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.284712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:03:43.284730 | orchestrator | 2025-09-16 01:03:43.284742 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-16 01:03:43.284753 | orchestrator | Tuesday 16 September 2025 01:02:52 +0000 (0:00:03.586) 0:01:08.859 ***** 2025-09-16 01:03:43.284764 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:03:43.284775 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:03:43.284786 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:03:43.284796 | orchestrator | 2025-09-16 01:03:43.284807 | orchestrator | TASK [barbican : Creating barbican database] 
***********************************
2025-09-16 01:03:43.284818 | orchestrator | Tuesday 16 September 2025 01:02:53 +0000 (0:00:00.353) 0:01:09.213 *****
2025-09-16 01:03:43.284829 | orchestrator | changed: [testbed-node-0]
2025-09-16 01:03:43.284840 | orchestrator |
2025-09-16 01:03:43.284851 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-09-16 01:03:43.284861 | orchestrator | Tuesday 16 September 2025 01:02:55 +0000 (0:00:02.420) 0:01:11.633 *****
2025-09-16 01:03:43.284872 | orchestrator | changed: [testbed-node-0]
2025-09-16 01:03:43.284883 | orchestrator |
2025-09-16 01:03:43.284894 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-09-16 01:03:43.284905 | orchestrator | Tuesday 16 September 2025 01:02:57 +0000 (0:00:02.283) 0:01:13.917 *****
2025-09-16 01:03:43.284915 | orchestrator | changed: [testbed-node-0]
2025-09-16 01:03:43.284926 | orchestrator |
2025-09-16 01:03:43.284937 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-16 01:03:43.284947 | orchestrator | Tuesday 16 September 2025 01:03:09 +0000 (0:00:11.983) 0:01:25.901 *****
2025-09-16 01:03:43.284958 | orchestrator |
2025-09-16 01:03:43.284969 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-16 01:03:43.284979 | orchestrator | Tuesday 16 September 2025 01:03:09 +0000 (0:00:00.154) 0:01:26.056 *****
2025-09-16 01:03:43.284990 | orchestrator |
2025-09-16 01:03:43.285001 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-16 01:03:43.285011 | orchestrator | Tuesday 16 September 2025 01:03:09 +0000 (0:00:00.117) 0:01:26.173 *****
2025-09-16 01:03:43.285022 | orchestrator |
2025-09-16 01:03:43.285032 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-09-16 01:03:43.285043 | orchestrator | Tuesday 16 September 2025 01:03:10 +0000 (0:00:00.125) 0:01:26.298 *****
2025-09-16 01:03:43.285054 | orchestrator | changed: [testbed-node-1]
2025-09-16 01:03:43.285064 | orchestrator | changed: [testbed-node-2]
2025-09-16 01:03:43.285075 | orchestrator | changed: [testbed-node-0]
2025-09-16 01:03:43.285086 | orchestrator |
2025-09-16 01:03:43.285097 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-09-16 01:03:43.285108 | orchestrator | Tuesday 16 September 2025 01:03:18 +0000 (0:00:08.418) 0:01:34.716 *****
2025-09-16 01:03:43.285118 | orchestrator | changed: [testbed-node-2]
2025-09-16 01:03:43.285129 | orchestrator | changed: [testbed-node-0]
2025-09-16 01:03:43.285140 | orchestrator | changed: [testbed-node-1]
2025-09-16 01:03:43.285150 | orchestrator |
2025-09-16 01:03:43.285219 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-09-16 01:03:43.285232 | orchestrator | Tuesday 16 September 2025 01:03:29 +0000 (0:00:11.189) 0:01:45.906 *****
2025-09-16 01:03:43.285243 | orchestrator | changed: [testbed-node-0]
2025-09-16 01:03:43.285254 | orchestrator | changed: [testbed-node-2]
2025-09-16 01:03:43.285264 | orchestrator | changed: [testbed-node-1]
2025-09-16 01:03:43.285275 | orchestrator |
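Every barbican container definition echoed in the loop items above carries the same 'healthcheck' block: interval, retries, start_period and timeout given as second strings, plus a CMD-SHELL test such as healthcheck_curl or healthcheck_port. As a rough illustration only (this is not the kolla-ansible source), the sketch below shows how such a block could be translated into the nanosecond-based HealthConfig mapping the Docker Engine API expects; the function name and the example item are made up for the sketch.

    # Illustrative sketch, not kolla-ansible code: translate one of the 'healthcheck'
    # blocks shown in the loop items above (seconds as strings, CMD-SHELL test) into
    # the nanosecond-based HealthConfig mapping used by the Docker Engine API.
    NANOS_PER_SECOND = 1_000_000_000

    def to_docker_healthcheck(hc: dict) -> dict:
        return {
            "Test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_port barbican-worker 5672']
            "Interval": int(hc["interval"]) * NANOS_PER_SECOND,
            "Timeout": int(hc["timeout"]) * NANOS_PER_SECOND,
            "StartPeriod": int(hc["start_period"]) * NANOS_PER_SECOND,
            "Retries": int(hc["retries"]),
        }

    if __name__ == "__main__":
        example = {"interval": "30", "retries": "3", "start_period": "5",
                   "test": ["CMD-SHELL", "healthcheck_port barbican-worker 5672"],
                   "timeout": "30"}
        print(to_docker_healthcheck(example))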
2025-09-16 01:03:43.285286 | orchestrator | PLAY RECAP *********************************************************************
2025-09-16 01:03:43.285298 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-16 01:03:43.285311 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-16 01:03:43.285323 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-16 01:03:43.285341 | orchestrator |
2025-09-16 01:03:43.285352 | orchestrator |
2025-09-16 01:03:43.285363 | orchestrator | TASKS RECAP ********************************************************************
2025-09-16 01:03:43.285373 | orchestrator | Tuesday 16 September 2025 01:03:41 +0000 (0:00:12.092) 0:01:57.998 *****
2025-09-16 01:03:43.285384 | orchestrator | ===============================================================================
2025-09-16 01:03:43.285395 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.40s
2025-09-16 01:03:43.285412 | orchestrator | barbican : Restart barbican-worker container --------------------------- 12.09s
2025-09-16 01:03:43.285429 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.98s
2025-09-16 01:03:43.285440 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.19s
2025-09-16 01:03:43.285451 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.42s
2025-09-16 01:03:43.285462 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 7.75s
2025-09-16 01:03:43.285474 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 5.62s
2025-09-16 01:03:43.285493 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.68s
2025-09-16 01:03:43.285512 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.01s
2025-09-16 01:03:43.285530 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.62s
2025-09-16 01:03:43.285545 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.59s
2025-09-16 01:03:43.285561 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.53s
2025-09-16 01:03:43.285577 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.43s
2025-09-16 01:03:43.285593 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.03s
2025-09-16 01:03:43.285609 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.42s
2025-09-16 01:03:43.285625 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.40s
2025-09-16 01:03:43.285642 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.28s
2025-09-16 01:03:43.285660 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.81s
2025-09-16 01:03:43.285675 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.32s
2025-09-16 01:03:43.285684 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.16s
2025-09-16 01:03:43.285694 | orchestrator | 2025-09-16 01:03:43 | INFO  | Wait 1 second(s) until the next check
2025-09-16 01:03:46.322392 | orchestrator | 2025-09-16 01:03:46 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED
2025-09-16 01:03:46.322615 | orchestrator | 2025-09-16 01:03:46 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED
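With the barbican play finished, the job returns to polling the deployment tasks it started (the four task IDs below) once per second until they leave the STARTED state. The loop that follows is only a sketch of that pattern, not the osism implementation: get_task_state stands in for whatever client call the real tooling makes, and the terminal-state set is the usual Celery one, assumed here for illustration.

    import time

    TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}  # assumed Celery-style terminal states

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        # Poll every task until it reports a terminal state, logging each check
        # in the same style as the job output below.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in TERMINAL_STATES:
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)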
Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:03:46.323098 | orchestrator | 2025-09-16 01:03:46 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:03:46.323736 | orchestrator | 2025-09-16 01:03:46 | INFO  | Task 642e7bf6-e04b-438e-86f0-0a0c3ed69340 is in state STARTED 2025-09-16 01:03:46.323758 | orchestrator | 2025-09-16 01:03:46 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:03:49.359306 | orchestrator | 2025-09-16 01:03:49 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:03:49.360826 | orchestrator | 2025-09-16 01:03:49 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:03:49.362699 | orchestrator | 2025-09-16 01:03:49 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:03:49.365058 | orchestrator | 2025-09-16 01:03:49 | INFO  | Task 642e7bf6-e04b-438e-86f0-0a0c3ed69340 is in state STARTED 2025-09-16 01:03:49.365237 | orchestrator | 2025-09-16 01:03:49 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:03:52.403342 | orchestrator | 2025-09-16 01:03:52 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:03:52.403721 | orchestrator | 2025-09-16 01:03:52 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:03:52.404907 | orchestrator | 2025-09-16 01:03:52 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:03:52.405807 | orchestrator | 2025-09-16 01:03:52 | INFO  | Task 642e7bf6-e04b-438e-86f0-0a0c3ed69340 is in state STARTED 2025-09-16 01:03:52.405915 | orchestrator | 2025-09-16 01:03:52 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:03:55.452762 | orchestrator | 2025-09-16 01:03:55 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:03:55.454978 | orchestrator | 2025-09-16 01:03:55 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:03:55.459686 | orchestrator | 2025-09-16 01:03:55 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:03:55.461880 | orchestrator | 2025-09-16 01:03:55 | INFO  | Task 642e7bf6-e04b-438e-86f0-0a0c3ed69340 is in state STARTED 2025-09-16 01:03:55.464426 | orchestrator | 2025-09-16 01:03:55 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:03:58.494560 | orchestrator | 2025-09-16 01:03:58 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:03:58.495595 | orchestrator | 2025-09-16 01:03:58 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:03:58.496285 | orchestrator | 2025-09-16 01:03:58 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:03:58.497133 | orchestrator | 2025-09-16 01:03:58 | INFO  | Task 642e7bf6-e04b-438e-86f0-0a0c3ed69340 is in state STARTED 2025-09-16 01:03:58.497473 | orchestrator | 2025-09-16 01:03:58 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:04:01.571974 | orchestrator | 2025-09-16 01:04:01 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:04:01.573779 | orchestrator | 2025-09-16 01:04:01 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:04:01.574552 | orchestrator | 2025-09-16 01:04:01 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:04:01.575020 | orchestrator | 2025-09-16 01:04:01 | INFO  | 
Task 642e7bf6-e04b-438e-86f0-0a0c3ed69340 is in state STARTED 2025-09-16 01:04:01.575129 | orchestrator | 2025-09-16 01:04:01 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:04:04.605035 | orchestrator | 2025-09-16 01:04:04 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:04:04.606115 | orchestrator | 2025-09-16 01:04:04 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:04:04.607547 | orchestrator | 2025-09-16 01:04:04 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:04:04.608242 | orchestrator | 2025-09-16 01:04:04 | INFO  | Task 642e7bf6-e04b-438e-86f0-0a0c3ed69340 is in state STARTED 2025-09-16 01:04:04.608824 | orchestrator | 2025-09-16 01:04:04 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:04:07.651364 | orchestrator | 2025-09-16 01:04:07 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:04:07.651722 | orchestrator | 2025-09-16 01:04:07 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:04:07.652563 | orchestrator | 2025-09-16 01:04:07 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:04:07.653827 | orchestrator | 2025-09-16 01:04:07 | INFO  | Task 642e7bf6-e04b-438e-86f0-0a0c3ed69340 is in state STARTED 2025-09-16 01:04:07.653849 | orchestrator | 2025-09-16 01:04:07 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:04:10.683684 | orchestrator | 2025-09-16 01:04:10 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:04:10.684683 | orchestrator | 2025-09-16 01:04:10 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:04:10.684723 | orchestrator | 2025-09-16 01:04:10 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:04:10.685388 | orchestrator | 2025-09-16 01:04:10 | INFO  | Task 642e7bf6-e04b-438e-86f0-0a0c3ed69340 is in state STARTED 2025-09-16 01:04:10.685429 | orchestrator | 2025-09-16 01:04:10 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:04:13.715245 | orchestrator | 2025-09-16 01:04:13 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:04:13.715734 | orchestrator | 2025-09-16 01:04:13 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:04:13.716643 | orchestrator | 2025-09-16 01:04:13 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:04:13.717480 | orchestrator | 2025-09-16 01:04:13 | INFO  | Task 642e7bf6-e04b-438e-86f0-0a0c3ed69340 is in state STARTED 2025-09-16 01:04:13.719333 | orchestrator | 2025-09-16 01:04:13 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:04:16.754000 | orchestrator | 2025-09-16 01:04:16 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:04:16.754313 | orchestrator | 2025-09-16 01:04:16 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:04:16.755850 | orchestrator | 2025-09-16 01:04:16 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:04:16.756500 | orchestrator | 2025-09-16 01:04:16 | INFO  | Task 642e7bf6-e04b-438e-86f0-0a0c3ed69340 is in state STARTED 2025-09-16 01:04:16.756526 | orchestrator | 2025-09-16 01:04:16 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:04:19.801020 | orchestrator | 2025-09-16 01:04:19 | INFO  | Task 
fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:04:19.803098 | orchestrator | 2025-09-16 01:04:19 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:04:19.803956 | orchestrator | 2025-09-16 01:04:19 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:04:19.806302 | orchestrator | 2025-09-16 01:04:19 | INFO  | Task 642e7bf6-e04b-438e-86f0-0a0c3ed69340 is in state STARTED 2025-09-16 01:04:19.806329 | orchestrator | 2025-09-16 01:04:19 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:04:22.831081 | orchestrator | 2025-09-16 01:04:22 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:04:22.831860 | orchestrator | 2025-09-16 01:04:22 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:04:22.833702 | orchestrator | 2025-09-16 01:04:22 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:04:22.834901 | orchestrator | 2025-09-16 01:04:22 | INFO  | Task 642e7bf6-e04b-438e-86f0-0a0c3ed69340 is in state STARTED 2025-09-16 01:04:22.834928 | orchestrator | 2025-09-16 01:04:22 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:04:25.864015 | orchestrator | 2025-09-16 01:04:25 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:04:25.866760 | orchestrator | 2025-09-16 01:04:25 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:04:25.868017 | orchestrator | 2025-09-16 01:04:25 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:04:25.868045 | orchestrator | 2025-09-16 01:04:25 | INFO  | Task 642e7bf6-e04b-438e-86f0-0a0c3ed69340 is in state SUCCESS 2025-09-16 01:04:25.868057 | orchestrator | 2025-09-16 01:04:25 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:04:28.913413 | orchestrator | 2025-09-16 01:04:28 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:04:28.915983 | orchestrator | 2025-09-16 01:04:28 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:04:28.916044 | orchestrator | 2025-09-16 01:04:28 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED 2025-09-16 01:04:28.916397 | orchestrator | 2025-09-16 01:04:28 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:04:28.916422 | orchestrator | 2025-09-16 01:04:28 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:04:31.942941 | orchestrator | 2025-09-16 01:04:31 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:04:31.943877 | orchestrator | 2025-09-16 01:04:31 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:04:31.945590 | orchestrator | 2025-09-16 01:04:31 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED 2025-09-16 01:04:31.946360 | orchestrator | 2025-09-16 01:04:31 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:04:31.946391 | orchestrator | 2025-09-16 01:04:31 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:04:34.976709 | orchestrator | 2025-09-16 01:04:34 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:04:34.977087 | orchestrator | 2025-09-16 01:04:34 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:04:34.977688 | orchestrator | 2025-09-16 01:04:34 | INFO  | Task 
d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED 2025-09-16 01:04:34.978397 | orchestrator | 2025-09-16 01:04:34 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:04:34.978520 | orchestrator | 2025-09-16 01:04:34 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:04:38.005342 | orchestrator | 2025-09-16 01:04:38 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:04:38.006972 | orchestrator | 2025-09-16 01:04:38 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:04:38.007593 | orchestrator | 2025-09-16 01:04:38 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED 2025-09-16 01:04:38.008374 | orchestrator | 2025-09-16 01:04:38 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:04:38.008531 | orchestrator | 2025-09-16 01:04:38 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:04:41.047879 | orchestrator | 2025-09-16 01:04:41 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:04:41.047963 | orchestrator | 2025-09-16 01:04:41 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:04:41.048190 | orchestrator | 2025-09-16 01:04:41 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED 2025-09-16 01:04:41.049647 | orchestrator | 2025-09-16 01:04:41 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:04:41.049682 | orchestrator | 2025-09-16 01:04:41 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:04:44.082751 | orchestrator | 2025-09-16 01:04:44 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:04:44.083902 | orchestrator | 2025-09-16 01:04:44 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:04:44.085554 | orchestrator | 2025-09-16 01:04:44 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED 2025-09-16 01:04:44.085597 | orchestrator | 2025-09-16 01:04:44 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:04:44.085609 | orchestrator | 2025-09-16 01:04:44 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:04:47.122227 | orchestrator | 2025-09-16 01:04:47 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:04:47.122891 | orchestrator | 2025-09-16 01:04:47 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:04:47.124239 | orchestrator | 2025-09-16 01:04:47 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED 2025-09-16 01:04:47.125332 | orchestrator | 2025-09-16 01:04:47 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:04:47.125437 | orchestrator | 2025-09-16 01:04:47 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:04:50.154795 | orchestrator | 2025-09-16 01:04:50 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:04:50.154994 | orchestrator | 2025-09-16 01:04:50 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:04:50.158116 | orchestrator | 2025-09-16 01:04:50 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED 2025-09-16 01:04:50.158429 | orchestrator | 2025-09-16 01:04:50 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:04:50.158575 | orchestrator | 2025-09-16 01:04:50 | INFO  | Wait 1 
second(s) until the next check 2025-09-16 01:04:53.198218 | orchestrator | 2025-09-16 01:04:53 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:04:53.198467 | orchestrator | 2025-09-16 01:04:53 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:04:53.199149 | orchestrator | 2025-09-16 01:04:53 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED 2025-09-16 01:04:53.199806 | orchestrator | 2025-09-16 01:04:53 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:04:53.199824 | orchestrator | 2025-09-16 01:04:53 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:04:56.240288 | orchestrator | 2025-09-16 01:04:56 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:04:56.240893 | orchestrator | 2025-09-16 01:04:56 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:04:56.241711 | orchestrator | 2025-09-16 01:04:56 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED 2025-09-16 01:04:56.243259 | orchestrator | 2025-09-16 01:04:56 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:04:56.243284 | orchestrator | 2025-09-16 01:04:56 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:04:59.264883 | orchestrator | 2025-09-16 01:04:59 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:04:59.265093 | orchestrator | 2025-09-16 01:04:59 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:04:59.265687 | orchestrator | 2025-09-16 01:04:59 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED 2025-09-16 01:04:59.266341 | orchestrator | 2025-09-16 01:04:59 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:04:59.266465 | orchestrator | 2025-09-16 01:04:59 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:05:02.289035 | orchestrator | 2025-09-16 01:05:02 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:05:02.289134 | orchestrator | 2025-09-16 01:05:02 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:05:02.289605 | orchestrator | 2025-09-16 01:05:02 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED 2025-09-16 01:05:02.290212 | orchestrator | 2025-09-16 01:05:02 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:05:02.291409 | orchestrator | 2025-09-16 01:05:02 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:05:05.318540 | orchestrator | 2025-09-16 01:05:05 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:05:05.323068 | orchestrator | 2025-09-16 01:05:05 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:05:05.324433 | orchestrator | 2025-09-16 01:05:05 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED 2025-09-16 01:05:05.325594 | orchestrator | 2025-09-16 01:05:05 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:05:05.325630 | orchestrator | 2025-09-16 01:05:05 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:05:08.354872 | orchestrator | 2025-09-16 01:05:08 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:05:08.355015 | orchestrator | 2025-09-16 01:05:08 | INFO  | Task 
e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:05:08.355476 | orchestrator | 2025-09-16 01:05:08 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED 2025-09-16 01:05:08.355997 | orchestrator | 2025-09-16 01:05:08 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:05:08.356019 | orchestrator | 2025-09-16 01:05:08 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:05:11.388250 | orchestrator | 2025-09-16 01:05:11 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:05:11.388787 | orchestrator | 2025-09-16 01:05:11 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:05:11.390467 | orchestrator | 2025-09-16 01:05:11 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED 2025-09-16 01:05:11.390550 | orchestrator | 2025-09-16 01:05:11 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:05:11.390584 | orchestrator | 2025-09-16 01:05:11 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:05:14.416877 | orchestrator | 2025-09-16 01:05:14 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:05:14.418304 | orchestrator | 2025-09-16 01:05:14 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:05:14.419912 | orchestrator | 2025-09-16 01:05:14 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED 2025-09-16 01:05:14.421424 | orchestrator | 2025-09-16 01:05:14 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:05:14.421671 | orchestrator | 2025-09-16 01:05:14 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:05:17.463051 | orchestrator | 2025-09-16 01:05:17 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:05:17.463234 | orchestrator | 2025-09-16 01:05:17 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:05:17.463901 | orchestrator | 2025-09-16 01:05:17 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED 2025-09-16 01:05:17.464391 | orchestrator | 2025-09-16 01:05:17 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:05:17.464542 | orchestrator | 2025-09-16 01:05:17 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:05:20.503421 | orchestrator | 2025-09-16 01:05:20 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:05:20.503520 | orchestrator | 2025-09-16 01:05:20 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:05:20.503535 | orchestrator | 2025-09-16 01:05:20 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED 2025-09-16 01:05:20.503547 | orchestrator | 2025-09-16 01:05:20 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state STARTED 2025-09-16 01:05:20.503558 | orchestrator | 2025-09-16 01:05:20 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:05:23.549432 | orchestrator | 2025-09-16 01:05:23 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:05:23.549537 | orchestrator | 2025-09-16 01:05:23 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:05:23.550322 | orchestrator | 2025-09-16 01:05:23 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED 2025-09-16 01:05:23.551299 | orchestrator | 2025-09-16 01:05:23 | INFO  | Task 
79127988-6863-4874-953b-d86a9e46a53e is in state STARTED
2025-09-16 01:05:23.551334 | orchestrator | 2025-09-16 01:05:23 | INFO  | Wait 1 second(s) until the next check
2025-09-16 01:05:26.589916 | orchestrator | 2025-09-16 01:05:26 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED
2025-09-16 01:05:26.590784 | orchestrator | 2025-09-16 01:05:26 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED
2025-09-16 01:05:26.591738 | orchestrator | 2025-09-16 01:05:26 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED
2025-09-16 01:05:26.596140 | orchestrator | 2025-09-16 01:05:26 | INFO  | Task 79127988-6863-4874-953b-d86a9e46a53e is in state SUCCESS
2025-09-16 01:05:26.597887 | orchestrator |
2025-09-16 01:05:26.597917 | orchestrator |
2025-09-16 01:05:26.597930 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-09-16 01:05:26.597942 | orchestrator |
2025-09-16 01:05:26.597953 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-09-16 01:05:26.597964 | orchestrator | Tuesday 16 September 2025 01:03:47 +0000 (0:00:00.073) 0:00:00.073 *****
2025-09-16 01:05:26.597976 | orchestrator | changed: [localhost]
2025-09-16 01:05:26.597988 | orchestrator |
2025-09-16 01:05:26.597999 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-09-16 01:05:26.598010 | orchestrator | Tuesday 16 September 2025 01:03:48 +0000 (0:00:00.745) 0:00:00.818 *****
2025-09-16 01:05:26.598062 | orchestrator | changed: [localhost]
2025-09-16 01:05:26.598074 | orchestrator |
2025-09-16 01:05:26.598085 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-09-16 01:05:26.598096 | orchestrator | Tuesday 16 September 2025 01:04:20 +0000 (0:00:32.130) 0:00:32.949 *****
2025-09-16 01:05:26.598107 | orchestrator | changed: [localhost]
2025-09-16 01:05:26.598118 | orchestrator |
2025-09-16 01:05:26.598129 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-16 01:05:26.598140 | orchestrator |
2025-09-16 01:05:26.598169 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-16 01:05:26.598181 | orchestrator | Tuesday 16 September 2025 01:04:24 +0000 (0:00:04.060) 0:00:37.009 *****
2025-09-16 01:05:26.598218 | orchestrator | ok: [testbed-node-0]
2025-09-16 01:05:26.598230 | orchestrator | ok: [testbed-node-1]
2025-09-16 01:05:26.598373 | orchestrator | ok: [testbed-node-2]
2025-09-16 01:05:26.598388 | orchestrator |
2025-09-16 01:05:26.598400 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-16 01:05:26.598411 | orchestrator | Tuesday 16 September 2025 01:04:25 +0000 (0:00:00.304) 0:00:37.313 *****
2025-09-16 01:05:26.598422 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-09-16 01:05:26.598433 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-09-16 01:05:26.598444 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-09-16 01:05:26.598455 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-09-16 01:05:26.598466 | orchestrator |
2025-09-16 01:05:26.598478 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-09-16 01:05:26.598491 | orchestrator | skipping: no hosts matched
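The long runs of "Task <uuid> is in state STARTED" followed by "Wait 1 second(s) until the next check" above come from the deploy wrapper polling the state of the background tasks it has enqueued (one per Kolla play) and printing each task's captured play output once it reports SUCCESS. A minimal sketch of that kind of polling loop, assuming a hypothetical get_task_state() helper in place of the real OSISM client (not shown in this log):

    import time
    from typing import Callable, Iterable

    def wait_for_tasks(task_ids: Iterable[str],
                       get_task_state: Callable[[str], str],
                       interval: float = 1.0) -> None:
        """Poll until every task has left the STARTED state.

        get_task_state is a placeholder for whatever returns the current
        state string (STARTED, SUCCESS, FAILURE, ...) for a task id.
        """
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)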
2025-09-16 01:05:26.598505 | orchestrator |
2025-09-16 01:05:26.598518 | orchestrator | PLAY RECAP *********************************************************************
2025-09-16 01:05:26.598530 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-16 01:05:26.598545 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-16 01:05:26.598559 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-16 01:05:26.598572 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-16 01:05:26.598584 | orchestrator |
2025-09-16 01:05:26.598596 | orchestrator |
2025-09-16 01:05:26.598609 | orchestrator | TASKS RECAP ********************************************************************
2025-09-16 01:05:26.598622 | orchestrator | Tuesday 16 September 2025 01:04:25 +0000 (0:00:00.426) 0:00:37.740 *****
2025-09-16 01:05:26.598635 | orchestrator | ===============================================================================
2025-09-16 01:05:26.598647 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 32.13s
2025-09-16 01:05:26.598659 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.06s
2025-09-16 01:05:26.598671 | orchestrator | Ensure the destination directory exists --------------------------------- 0.75s
2025-09-16 01:05:26.598684 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s
2025-09-16 01:05:26.598696 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2025-09-16 01:05:26.598709 | orchestrator |
2025-09-16 01:05:26.598721 | orchestrator |
2025-09-16 01:05:26.598733 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-16 01:05:26.598746 | orchestrator |
2025-09-16 01:05:26.598758 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-16 01:05:26.598770 | orchestrator | Tuesday 16 September 2025 01:01:39 +0000 (0:00:00.237) 0:00:00.237 *****
2025-09-16 01:05:26.598782 | orchestrator | ok: [testbed-node-0]
2025-09-16 01:05:26.598795 | orchestrator | ok: [testbed-node-1]
2025-09-16 01:05:26.598808 | orchestrator | ok: [testbed-node-2]
2025-09-16 01:05:26.598821 | orchestrator | ok: [testbed-node-3]
2025-09-16 01:05:26.598834 | orchestrator | ok: [testbed-node-4]
2025-09-16 01:05:26.598845 | orchestrator | ok: [testbed-node-5]
2025-09-16 01:05:26.598856 | orchestrator |
2025-09-16 01:05:26.598867 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-16 01:05:26.598878 | orchestrator | Tuesday 16 September 2025 01:01:40 +0000 (0:00:00.602) 0:00:00.839 *****
2025-09-16 01:05:26.598889 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-09-16 01:05:26.598914 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-09-16 01:05:26.598925 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-09-16 01:05:26.598945 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-09-16 01:05:26.598956 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-09-16 01:05:26.598967 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-09-16 01:05:26.598978 | orchestrator
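The two "Group hosts based on ..." tasks above build dynamic inventory groups such as enable_neutron_True or enable_ironic_False from per-host boolean flags; later plays target only the matching *_True group, which is why "PLAY [Apply role ironic]" was skipped with "no hosts matched" while the neutron play that follows runs on all six nodes. The real mechanism is Ansible's group_by module inside the Kolla plays; the snippet below is only a plain-Python illustration of the naming scheme, using made-up inventory flags:

    # Hypothetical per-host flags; in the deployment these come from
    # kolla-ansible variables such as enable_neutron / enable_ironic.
    host_vars = {
        "testbed-node-0": {"enable_neutron": True, "enable_ironic": False},
        "testbed-node-1": {"enable_neutron": True, "enable_ironic": False},
        "testbed-node-2": {"enable_neutron": True, "enable_ironic": False},
    }

    def build_service_groups(host_vars: dict) -> dict:
        """Mimic a group_by key of "<flag name>_<flag value>" for each host."""
        groups: dict[str, list[str]] = {}
        for host, flags in host_vars.items():
            for service, enabled in flags.items():
                groups.setdefault(f"{service}_{enabled}", []).append(host)
        return groups

    groups = build_service_groups(host_vars)
    # A play scoped to "hosts: enable_ironic_True" finds no members here,
    # so it is skipped, just like the "Apply role ironic" play above.
    print(groups.get("enable_ironic_True", []))   # -> []
    print(groups.get("enable_neutron_True", []))  # -> the three example nodes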
| 2025-09-16 01:05:26.598988 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-09-16 01:05:26.598999 | orchestrator | 2025-09-16 01:05:26.599010 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-16 01:05:26.599021 | orchestrator | Tuesday 16 September 2025 01:01:40 +0000 (0:00:00.508) 0:00:01.348 ***** 2025-09-16 01:05:26.599044 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 01:05:26.599056 | orchestrator | 2025-09-16 01:05:26.599066 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-09-16 01:05:26.599077 | orchestrator | Tuesday 16 September 2025 01:01:41 +0000 (0:00:01.034) 0:00:02.383 ***** 2025-09-16 01:05:26.599088 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:05:26.599098 | orchestrator | ok: [testbed-node-1] 2025-09-16 01:05:26.599109 | orchestrator | ok: [testbed-node-3] 2025-09-16 01:05:26.599119 | orchestrator | ok: [testbed-node-4] 2025-09-16 01:05:26.599130 | orchestrator | ok: [testbed-node-2] 2025-09-16 01:05:26.599140 | orchestrator | ok: [testbed-node-5] 2025-09-16 01:05:26.599170 | orchestrator | 2025-09-16 01:05:26.599181 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-09-16 01:05:26.599192 | orchestrator | Tuesday 16 September 2025 01:01:42 +0000 (0:00:01.193) 0:00:03.577 ***** 2025-09-16 01:05:26.599203 | orchestrator | ok: [testbed-node-3] 2025-09-16 01:05:26.599213 | orchestrator | ok: [testbed-node-2] 2025-09-16 01:05:26.599224 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:05:26.599234 | orchestrator | ok: [testbed-node-4] 2025-09-16 01:05:26.599245 | orchestrator | ok: [testbed-node-1] 2025-09-16 01:05:26.599256 | orchestrator | ok: [testbed-node-5] 2025-09-16 01:05:26.599266 | orchestrator | 2025-09-16 01:05:26.599277 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-09-16 01:05:26.599288 | orchestrator | Tuesday 16 September 2025 01:01:44 +0000 (0:00:01.142) 0:00:04.719 ***** 2025-09-16 01:05:26.599298 | orchestrator | ok: [testbed-node-0] => { 2025-09-16 01:05:26.599309 | orchestrator |  "changed": false, 2025-09-16 01:05:26.599320 | orchestrator |  "msg": "All assertions passed" 2025-09-16 01:05:26.599331 | orchestrator | } 2025-09-16 01:05:26.599342 | orchestrator | ok: [testbed-node-1] => { 2025-09-16 01:05:26.599352 | orchestrator |  "changed": false, 2025-09-16 01:05:26.599363 | orchestrator |  "msg": "All assertions passed" 2025-09-16 01:05:26.599374 | orchestrator | } 2025-09-16 01:05:26.599384 | orchestrator | ok: [testbed-node-2] => { 2025-09-16 01:05:26.599395 | orchestrator |  "changed": false, 2025-09-16 01:05:26.599406 | orchestrator |  "msg": "All assertions passed" 2025-09-16 01:05:26.599416 | orchestrator | } 2025-09-16 01:05:26.599427 | orchestrator | ok: [testbed-node-3] => { 2025-09-16 01:05:26.599438 | orchestrator |  "changed": false, 2025-09-16 01:05:26.599448 | orchestrator |  "msg": "All assertions passed" 2025-09-16 01:05:26.599459 | orchestrator | } 2025-09-16 01:05:26.599469 | orchestrator | ok: [testbed-node-4] => { 2025-09-16 01:05:26.599480 | orchestrator |  "changed": false, 2025-09-16 01:05:26.599490 | orchestrator |  "msg": "All assertions passed" 2025-09-16 01:05:26.599501 | orchestrator | } 2025-09-16 01:05:26.599512 | 
orchestrator | ok: [testbed-node-5] => { 2025-09-16 01:05:26.599522 | orchestrator |  "changed": false, 2025-09-16 01:05:26.599533 | orchestrator |  "msg": "All assertions passed" 2025-09-16 01:05:26.599543 | orchestrator | } 2025-09-16 01:05:26.599554 | orchestrator | 2025-09-16 01:05:26.599565 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-09-16 01:05:26.599576 | orchestrator | Tuesday 16 September 2025 01:01:44 +0000 (0:00:00.808) 0:00:05.527 ***** 2025-09-16 01:05:26.599594 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.599604 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.599615 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.599625 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.599636 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.599647 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.599657 | orchestrator | 2025-09-16 01:05:26.599668 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-09-16 01:05:26.599679 | orchestrator | Tuesday 16 September 2025 01:01:45 +0000 (0:00:00.507) 0:00:06.035 ***** 2025-09-16 01:05:26.599690 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-09-16 01:05:26.599700 | orchestrator | 2025-09-16 01:05:26.599711 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-09-16 01:05:26.599722 | orchestrator | Tuesday 16 September 2025 01:01:49 +0000 (0:00:03.576) 0:00:09.612 ***** 2025-09-16 01:05:26.599732 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-09-16 01:05:26.599744 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-09-16 01:05:26.599754 | orchestrator | 2025-09-16 01:05:26.599765 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-09-16 01:05:26.599776 | orchestrator | Tuesday 16 September 2025 01:01:54 +0000 (0:00:05.732) 0:00:15.344 ***** 2025-09-16 01:05:26.599787 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-16 01:05:26.599797 | orchestrator | 2025-09-16 01:05:26.599808 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-09-16 01:05:26.599819 | orchestrator | Tuesday 16 September 2025 01:01:58 +0000 (0:00:03.337) 0:00:18.682 ***** 2025-09-16 01:05:26.599830 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-16 01:05:26.599841 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-09-16 01:05:26.599852 | orchestrator | 2025-09-16 01:05:26.599862 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-09-16 01:05:26.599873 | orchestrator | Tuesday 16 September 2025 01:02:02 +0000 (0:00:03.952) 0:00:22.634 ***** 2025-09-16 01:05:26.599883 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-16 01:05:26.599894 | orchestrator | 2025-09-16 01:05:26.599910 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-09-16 01:05:26.599921 | orchestrator | Tuesday 16 September 2025 01:02:05 +0000 (0:00:03.576) 0:00:26.211 ***** 2025-09-16 01:05:26.599932 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-09-16 01:05:26.599942 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service -> service) 2025-09-16 01:05:26.599953 | orchestrator | 2025-09-16 01:05:26.599964 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-16 01:05:26.599974 | orchestrator | Tuesday 16 September 2025 01:02:13 +0000 (0:00:08.289) 0:00:34.501 ***** 2025-09-16 01:05:26.599985 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.599996 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.600013 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.600024 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.600035 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.600046 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.600056 | orchestrator | 2025-09-16 01:05:26.600067 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-09-16 01:05:26.600077 | orchestrator | Tuesday 16 September 2025 01:02:14 +0000 (0:00:00.599) 0:00:35.100 ***** 2025-09-16 01:05:26.600088 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.600099 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.600109 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.600120 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.600130 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.600141 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.600175 | orchestrator | 2025-09-16 01:05:26.600187 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-09-16 01:05:26.600198 | orchestrator | Tuesday 16 September 2025 01:02:16 +0000 (0:00:01.771) 0:00:36.871 ***** 2025-09-16 01:05:26.600209 | orchestrator | ok: [testbed-node-1] 2025-09-16 01:05:26.600219 | orchestrator | ok: [testbed-node-2] 2025-09-16 01:05:26.600230 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:05:26.600241 | orchestrator | ok: [testbed-node-3] 2025-09-16 01:05:26.600252 | orchestrator | ok: [testbed-node-5] 2025-09-16 01:05:26.600262 | orchestrator | ok: [testbed-node-4] 2025-09-16 01:05:26.600273 | orchestrator | 2025-09-16 01:05:26.600283 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-16 01:05:26.600294 | orchestrator | Tuesday 16 September 2025 01:02:18 +0000 (0:00:01.890) 0:00:38.762 ***** 2025-09-16 01:05:26.600305 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.600316 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.600326 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.600337 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.600348 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.600358 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.600369 | orchestrator | 2025-09-16 01:05:26.600380 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-09-16 01:05:26.600390 | orchestrator | Tuesday 16 September 2025 01:02:20 +0000 (0:00:02.382) 0:00:41.145 ***** 2025-09-16 01:05:26.600405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-16 01:05:26.600421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-16 01:05:26.600438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-16 01:05:26.600479 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-16 01:05:26.600492 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-16 01:05:26.600503 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-16 01:05:26.600515 | orchestrator | 2025-09-16 01:05:26.600526 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-09-16 01:05:26.600537 | orchestrator | Tuesday 16 September 2025 01:02:23 +0000 (0:00:03.057) 0:00:44.203 ***** 2025-09-16 01:05:26.600548 | orchestrator | [WARNING]: Skipped 2025-09-16 01:05:26.600560 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-09-16 01:05:26.600571 | orchestrator | due to this access issue: 2025-09-16 01:05:26.600582 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-09-16 01:05:26.600592 | orchestrator | a directory 2025-09-16 01:05:26.600603 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-16 01:05:26.600614 | orchestrator | 2025-09-16 01:05:26.600625 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-16 01:05:26.600636 | orchestrator | Tuesday 16 September 2025 01:02:24 +0000 (0:00:00.948) 0:00:45.151 ***** 2025-09-16 01:05:26.600647 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 01:05:26.600659 | orchestrator | 2025-09-16 01:05:26.600670 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-09-16 01:05:26.600681 | orchestrator | Tuesday 16 September 2025 01:02:25 +0000 (0:00:01.008) 0:00:46.160 ***** 2025-09-16 01:05:26.600697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-16 01:05:26.600722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-16 01:05:26.600734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-16 01:05:26.600746 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-16 01:05:26.600757 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-16 01:05:26.600779 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-16 01:05:26.600791 | orchestrator | 2025-09-16 01:05:26.600802 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-09-16 01:05:26.600818 | orchestrator | Tuesday 16 September 2025 01:02:28 +0000 (0:00:02.885) 0:00:49.045 ***** 2025-09-16 01:05:26.600829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-16 01:05:26.600841 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.600853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-16 01:05:26.600864 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.600875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-16 01:05:26.600887 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.600909 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 01:05:26.600921 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.600939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 01:05:26.600951 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.600962 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 01:05:26.600973 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.600984 | orchestrator | 2025-09-16 01:05:26.600995 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-09-16 01:05:26.601006 | orchestrator | Tuesday 16 September 2025 01:02:30 +0000 (0:00:02.313) 0:00:51.359 ***** 2025-09-16 01:05:26.601018 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 01:05:26.601029 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.601040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-16 01:05:26.601058 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.601074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-16 01:05:26.601085 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.601103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-16 01:05:26.601115 
| orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.601126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 01:05:26.601137 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.601148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 01:05:26.601183 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.601194 | orchestrator | 2025-09-16 01:05:26.601205 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-09-16 01:05:26.601216 | orchestrator | Tuesday 16 September 2025 01:02:33 +0000 (0:00:02.847) 0:00:54.206 ***** 2025-09-16 01:05:26.601226 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.601237 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.601248 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.601258 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.601269 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.601280 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.601290 | orchestrator | 2025-09-16 01:05:26.601301 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-09-16 01:05:26.601312 | orchestrator | Tuesday 16 September 2025 01:02:35 +0000 (0:00:01.725) 0:00:55.932 ***** 2025-09-16 01:05:26.601322 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.601333 | orchestrator | 2025-09-16 01:05:26.601344 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-09-16 01:05:26.601354 | orchestrator | Tuesday 16 September 2025 01:02:35 +0000 (0:00:00.108) 0:00:56.041 ***** 2025-09-16 01:05:26.601365 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.601376 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.601386 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.601397 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.601407 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.601418 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.601428 | 
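The healthcheck test commands differ between the two services: the API containers call healthcheck_curl against the node's own address on port 9696, while the OVN metadata agent containers call healthcheck_port for port 6640. A rough Python approximation of what such probes verify (an HTTP endpoint answering, a TCP port reachable); this is only an illustration, not the helper scripts shipped in the kolla images:

    import socket
    import urllib.error
    import urllib.request

    def http_alive(url: str, timeout: float = 30.0) -> bool:
        """Return True if the URL answers at all (any HTTP status counts)."""
        try:
            urllib.request.urlopen(url, timeout=timeout)
            return True
        except urllib.error.HTTPError:
            return True   # the server answered, even if with an error status
        except OSError:
            return False  # refused, timed out, unreachable, ...

    def port_reachable(host: str, port: int, timeout: float = 30.0) -> bool:
        """Return True if a TCP connection to host:port can be opened."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Probes analogous to the two test commands in the log. The HTTP target is
    # copied from the log; the TCP target host is an assumption for illustration.
    print(http_alive("http://192.168.16.10:9696"))   # as in: healthcheck_curl http://192.168.16.10:9696
    print(port_reachable("192.168.16.10", 6640))     # as in: healthcheck_port neutron-ovn-metadata-agent 6640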
orchestrator | 2025-09-16 01:05:26.601439 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-09-16 01:05:26.601450 | orchestrator | Tuesday 16 September 2025 01:02:36 +0000 (0:00:00.672) 0:00:56.713 ***** 2025-09-16 01:05:26.601861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-16 01:05:26.601880 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.601891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-16 01:05:26.601903 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.601914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 01:05:26.601933 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.601944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-16 01:05:26.601955 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.601970 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 01:05:26.601982 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.602001 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 01:05:26.602013 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.602054 | orchestrator | 2025-09-16 01:05:26.602065 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-09-16 01:05:26.602076 | orchestrator | Tuesday 16 September 2025 01:02:38 +0000 (0:00:02.278) 0:00:58.991 ***** 2025-09-16 01:05:26.602087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-16 01:05:26.602108 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-16 01:05:26.602119 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-16 01:05:26.602136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-16 01:05:26.602189 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-16 01:05:26.602203 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-16 01:05:26.602226 | orchestrator | 2025-09-16 01:05:26.602237 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-09-16 01:05:26.602248 | orchestrator | Tuesday 16 September 2025 01:02:41 +0000 (0:00:03.432) 0:01:02.424 ***** 2025-09-16 01:05:26.602259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-16 01:05:26.602271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-16 01:05:26.602293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-16 01:05:26.602305 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-16 01:05:26.602323 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-16 01:05:26.602334 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-16 01:05:26.602345 | orchestrator | 2025-09-16 01:05:26.602356 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-09-16 01:05:26.602367 | orchestrator | Tuesday 16 September 2025 01:02:47 +0000 (0:00:05.683) 0:01:08.107 ***** 2025-09-16 01:05:26.602379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-16 01:05:26.602390 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.602412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-16 01:05:26.602424 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.602436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-16 01:05:26.602453 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.602466 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 01:05:26.602480 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.602493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 01:05:26.602505 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.602518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 01:05:26.602536 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.602550 | orchestrator | 2025-09-16 01:05:26.602562 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-09-16 01:05:26.602575 | orchestrator | Tuesday 16 September 2025 01:02:50 +0000 (0:00:02.995) 0:01:11.103 ***** 2025-09-16 01:05:26.602587 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.602600 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.602612 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.602625 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:05:26.602638 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:05:26.602651 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:05:26.602663 | orchestrator | 2025-09-16 01:05:26.602676 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-09-16 01:05:26.602699 | orchestrator | Tuesday 16 September 2025 01:02:53 +0000 (0:00:02.929) 0:01:14.033 ***** 2025-09-16 01:05:26.602713 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 01:05:26.602726 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.602739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 01:05:26.602752 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.602766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 01:05:26.602779 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.602792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-16 01:05:26.602816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-16 01:05:26.602835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-16 01:05:26.602846 | orchestrator | 2025-09-16 01:05:26.602857 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-09-16 01:05:26.602868 | orchestrator | Tuesday 16 September 2025 01:02:57 +0000 (0:00:03.726) 0:01:17.759 ***** 2025-09-16 01:05:26.602879 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.602890 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.602900 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.602911 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.602921 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.602932 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.602942 | orchestrator | 2025-09-16 01:05:26.602953 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-09-16 01:05:26.602964 | orchestrator | Tuesday 16 September 2025 01:02:59 +0000 (0:00:02.672) 0:01:20.432 ***** 2025-09-16 01:05:26.602975 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.602985 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.602996 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.603006 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.603017 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.603027 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.603038 | orchestrator | 2025-09-16 01:05:26.603049 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-09-16 01:05:26.603059 | orchestrator | Tuesday 16 September 2025 01:03:02 +0000 (0:00:02.818) 0:01:23.250 ***** 2025-09-16 01:05:26.603070 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.603081 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.603091 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.603102 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.603112 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.603123 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.603133 | orchestrator | 2025-09-16 01:05:26.603144 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-09-16 01:05:26.603174 | orchestrator | Tuesday 16 September 2025 01:03:04 +0000 (0:00:01.778) 0:01:25.029 ***** 2025-09-16 01:05:26.603185 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.603195 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.603206 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.603217 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.603227 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.603238 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.603248 | orchestrator | 2025-09-16 01:05:26.603259 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-09-16 01:05:26.603277 | orchestrator | Tuesday 16 September 2025 01:03:06 +0000 (0:00:02.163) 0:01:27.193 ***** 2025-09-16 
01:05:26.603288 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.603299 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.603309 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.603320 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.603331 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.603341 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.603352 | orchestrator | 2025-09-16 01:05:26.603362 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-09-16 01:05:26.603373 | orchestrator | Tuesday 16 September 2025 01:03:08 +0000 (0:00:02.213) 0:01:29.406 ***** 2025-09-16 01:05:26.603384 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.603395 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.603405 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.603416 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.603426 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.603437 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.603447 | orchestrator | 2025-09-16 01:05:26.603458 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-09-16 01:05:26.603469 | orchestrator | Tuesday 16 September 2025 01:03:11 +0000 (0:00:02.720) 0:01:32.126 ***** 2025-09-16 01:05:26.603480 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-16 01:05:26.603495 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.603506 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-16 01:05:26.603517 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.603528 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-16 01:05:26.603539 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.603549 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-16 01:05:26.603560 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.603576 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-16 01:05:26.603587 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.603598 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-16 01:05:26.603609 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.603619 | orchestrator | 2025-09-16 01:05:26.603630 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-09-16 01:05:26.603641 | orchestrator | Tuesday 16 September 2025 01:03:13 +0000 (0:00:02.215) 0:01:34.342 ***** 2025-09-16 01:05:26.603652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-16 01:05:26.603664 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.603675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-16 01:05:26.603693 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.603704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-16 01:05:26.603715 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.603730 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 01:05:26.603742 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.603759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 01:05:26.603771 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.603782 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 01:05:26.603799 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.603810 | orchestrator | 2025-09-16 01:05:26.603821 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-09-16 01:05:26.603831 | orchestrator | Tuesday 16 September 2025 01:03:15 +0000 (0:00:01.988) 0:01:36.331 ***** 2025-09-16 01:05:26.603843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-16 01:05:26.603854 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.603865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})  2025-09-16 01:05:26.603876 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.603898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-16 01:05:26.603910 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.603921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 01:05:26.603941 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.603953 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 01:05:26.603964 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.603975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 01:05:26.603986 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.603996 | orchestrator | 2025-09-16 01:05:26.604007 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-09-16 01:05:26.604018 | orchestrator | Tuesday 16 September 2025 01:03:17 +0000 (0:00:01.552) 0:01:37.883 ***** 2025-09-16 01:05:26.604029 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.604039 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.604050 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.604061 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.604071 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.604082 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.604092 | orchestrator | 2025-09-16 01:05:26.604103 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-09-16 01:05:26.604114 | orchestrator | Tuesday 16 September 2025 01:03:20 +0000 (0:00:03.257) 0:01:41.141 ***** 2025-09-16 01:05:26.604125 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.604135 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.604146 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.604173 | orchestrator | changed: [testbed-node-5] 2025-09-16 01:05:26.604183 | orchestrator | changed: [testbed-node-3] 2025-09-16 01:05:26.604194 | orchestrator | changed: [testbed-node-4] 2025-09-16 01:05:26.604205 | orchestrator | 2025-09-16 01:05:26.604215 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-09-16 01:05:26.604226 | orchestrator | Tuesday 16 September 2025 01:03:24 +0000 (0:00:03.592) 0:01:44.733 ***** 2025-09-16 01:05:26.604241 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.604252 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.604263 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.604273 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.604284 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.604294 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.604305 | orchestrator | 2025-09-16 01:05:26.604316 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-09-16 01:05:26.604327 | orchestrator | Tuesday 16 September 2025 01:03:25 +0000 (0:00:01.840) 0:01:46.574 ***** 2025-09-16 01:05:26.604338 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.604348 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.604365 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.604381 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.604392 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.604403 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.604414 | orchestrator | 2025-09-16 01:05:26.604425 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-09-16 01:05:26.604436 | orchestrator | Tuesday 16 September 2025 01:03:27 +0000 (0:00:01.854) 0:01:48.428 ***** 2025-09-16 01:05:26.604447 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.604458 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.604468 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.604479 | orchestrator | skipping: [testbed-node-0] 2025-09-16 
01:05:26.604489 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.604500 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.604511 | orchestrator | 2025-09-16 01:05:26.604521 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-09-16 01:05:26.604532 | orchestrator | Tuesday 16 September 2025 01:03:29 +0000 (0:00:01.827) 0:01:50.256 ***** 2025-09-16 01:05:26.604543 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.604554 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.604564 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.604575 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.604586 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.604596 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.604607 | orchestrator | 2025-09-16 01:05:26.604618 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-09-16 01:05:26.604629 | orchestrator | Tuesday 16 September 2025 01:03:33 +0000 (0:00:03.709) 0:01:53.965 ***** 2025-09-16 01:05:26.604640 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.604650 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.604661 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.604672 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.604682 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.604693 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.604704 | orchestrator | 2025-09-16 01:05:26.604714 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-09-16 01:05:26.604725 | orchestrator | Tuesday 16 September 2025 01:03:35 +0000 (0:00:01.785) 0:01:55.750 ***** 2025-09-16 01:05:26.604736 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.604747 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.604758 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.604768 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.604779 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.604790 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.604800 | orchestrator | 2025-09-16 01:05:26.604811 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-09-16 01:05:26.604822 | orchestrator | Tuesday 16 September 2025 01:03:37 +0000 (0:00:02.150) 0:01:57.900 ***** 2025-09-16 01:05:26.604833 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.604844 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.604855 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.604865 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.604876 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.604887 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.604898 | orchestrator | 2025-09-16 01:05:26.604909 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-09-16 01:05:26.604920 | orchestrator | Tuesday 16 September 2025 01:03:38 +0000 (0:00:01.673) 0:01:59.574 ***** 2025-09-16 01:05:26.604930 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-16 01:05:26.604942 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.604953 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-16 01:05:26.604970 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.604980 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-16 01:05:26.604991 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.605002 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-16 01:05:26.605013 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.605024 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-16 01:05:26.605035 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.605045 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-16 01:05:26.605056 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.605067 | orchestrator | 2025-09-16 01:05:26.605078 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-09-16 01:05:26.605089 | orchestrator | Tuesday 16 September 2025 01:03:40 +0000 (0:00:01.978) 0:02:01.552 ***** 2025-09-16 01:05:26.605110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-16 01:05:26.605123 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.605134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 01:05:26.605145 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.605207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-16 01:05:26.605220 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.605231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-16 01:05:26.605251 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.605261 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 01:05:26.605271 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.605290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-16 01:05:26.605301 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.605311 | orchestrator | 2025-09-16 01:05:26.605320 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-09-16 01:05:26.605330 | orchestrator | Tuesday 16 September 2025 01:03:43 +0000 (0:00:02.772) 0:02:04.324 
***** 2025-09-16 01:05:26.605339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-16 01:05:26.605350 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-16 01:05:26.605366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-16 01:05:26.605381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}}) 2025-09-16 01:05:26.605398 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-16 01:05:26.605408 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-16 01:05:26.605418 | orchestrator | 2025-09-16 01:05:26.605428 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-16 01:05:26.605438 | orchestrator | Tuesday 16 September 2025 01:03:47 +0000 (0:00:03.751) 0:02:08.076 ***** 2025-09-16 01:05:26.605448 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:26.605463 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:26.605473 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:26.605482 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:05:26.605491 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:05:26.605501 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:05:26.605510 | orchestrator | 2025-09-16 01:05:26.605519 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-09-16 01:05:26.605529 | orchestrator | Tuesday 16 September 2025 01:03:47 +0000 (0:00:00.472) 0:02:08.548 ***** 2025-09-16 01:05:26.605539 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:05:26.605548 | orchestrator | 2025-09-16 01:05:26.605558 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-09-16 01:05:26.605567 | orchestrator | Tuesday 16 September 2025 01:03:50 +0000 (0:00:02.147) 0:02:10.696 ***** 2025-09-16 01:05:26.605577 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:05:26.605586 | orchestrator | 2025-09-16 01:05:26.605595 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-09-16 01:05:26.605605 | orchestrator | Tuesday 16 September 2025 01:03:52 +0000 (0:00:02.332) 0:02:13.028 ***** 2025-09-16 01:05:26.605615 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:05:26.605624 | orchestrator | 2025-09-16 01:05:26.605633 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-16 01:05:26.605643 | orchestrator 
| Tuesday 16 September 2025 01:04:31 +0000 (0:00:38.815) 0:02:51.844 ***** 2025-09-16 01:05:26.605652 | orchestrator | 2025-09-16 01:05:26.605662 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-16 01:05:26.605671 | orchestrator | Tuesday 16 September 2025 01:04:31 +0000 (0:00:00.059) 0:02:51.904 ***** 2025-09-16 01:05:26.605681 | orchestrator | 2025-09-16 01:05:26.605690 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-16 01:05:26.605700 | orchestrator | Tuesday 16 September 2025 01:04:31 +0000 (0:00:00.164) 0:02:52.068 ***** 2025-09-16 01:05:26.605709 | orchestrator | 2025-09-16 01:05:26.605718 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-16 01:05:26.605728 | orchestrator | Tuesday 16 September 2025 01:04:31 +0000 (0:00:00.058) 0:02:52.126 ***** 2025-09-16 01:05:26.605737 | orchestrator | 2025-09-16 01:05:26.605747 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-16 01:05:26.605756 | orchestrator | Tuesday 16 September 2025 01:04:31 +0000 (0:00:00.061) 0:02:52.188 ***** 2025-09-16 01:05:26.605766 | orchestrator | 2025-09-16 01:05:26.605775 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-16 01:05:26.605785 | orchestrator | Tuesday 16 September 2025 01:04:31 +0000 (0:00:00.062) 0:02:52.251 ***** 2025-09-16 01:05:26.605794 | orchestrator | 2025-09-16 01:05:26.605804 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-09-16 01:05:26.605813 | orchestrator | Tuesday 16 September 2025 01:04:31 +0000 (0:00:00.061) 0:02:52.313 ***** 2025-09-16 01:05:26.605823 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:05:26.605832 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:05:26.605842 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:05:26.605851 | orchestrator | 2025-09-16 01:05:26.605861 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-09-16 01:05:26.605870 | orchestrator | Tuesday 16 September 2025 01:04:54 +0000 (0:00:22.655) 0:03:14.968 ***** 2025-09-16 01:05:26.605884 | orchestrator | changed: [testbed-node-5] 2025-09-16 01:05:26.605894 | orchestrator | changed: [testbed-node-4] 2025-09-16 01:05:26.605903 | orchestrator | changed: [testbed-node-3] 2025-09-16 01:05:26.605913 | orchestrator | 2025-09-16 01:05:26.605922 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 01:05:26.605932 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-16 01:05:26.605942 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-16 01:05:26.605963 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-16 01:05:26.605973 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-16 01:05:26.605983 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-16 01:05:26.605992 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-16 01:05:26.606002 | orchestrator | 2025-09-16 01:05:26.606012 | 
orchestrator | 2025-09-16 01:05:26.606045 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 01:05:26.606055 | orchestrator | Tuesday 16 September 2025 01:05:23 +0000 (0:00:29.160) 0:03:44.128 ***** 2025-09-16 01:05:26.606064 | orchestrator | =============================================================================== 2025-09-16 01:05:26.606074 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 38.82s 2025-09-16 01:05:26.606083 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 29.16s 2025-09-16 01:05:26.606093 | orchestrator | neutron : Restart neutron-server container ----------------------------- 22.66s 2025-09-16 01:05:26.606102 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.29s 2025-09-16 01:05:26.606112 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 5.73s 2025-09-16 01:05:26.606121 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.68s 2025-09-16 01:05:26.606131 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.95s 2025-09-16 01:05:26.606140 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.75s 2025-09-16 01:05:26.606164 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.73s 2025-09-16 01:05:26.606174 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 3.71s 2025-09-16 01:05:26.606184 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.59s 2025-09-16 01:05:26.606193 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.58s 2025-09-16 01:05:26.606202 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.58s 2025-09-16 01:05:26.606212 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.43s 2025-09-16 01:05:26.606222 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.34s 2025-09-16 01:05:26.606231 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 3.26s 2025-09-16 01:05:26.606240 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.06s 2025-09-16 01:05:26.606250 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.00s 2025-09-16 01:05:26.606259 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.93s 2025-09-16 01:05:26.606269 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 2.89s 2025-09-16 01:05:26.606278 | orchestrator | 2025-09-16 01:05:26 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:05:26.606288 | orchestrator | 2025-09-16 01:05:26 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:05:29.629477 | orchestrator | 2025-09-16 01:05:29 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:05:29.629693 | orchestrator | 2025-09-16 01:05:29 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:05:29.630369 | orchestrator | 2025-09-16 01:05:29 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED 2025-09-16 01:05:29.630931 | 
orchestrator | 2025-09-16 01:05:29 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:05:29.631093 | orchestrator | 2025-09-16 01:05:29 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:05:32.677430 | orchestrator | 2025-09-16 01:05:32 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:05:32.677980 | orchestrator | 2025-09-16 01:05:32 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:05:32.678791 | orchestrator | 2025-09-16 01:05:32 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED 2025-09-16 01:05:32.681201 | orchestrator | 2025-09-16 01:05:32 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:05:32.681228 | orchestrator | 2025-09-16 01:05:32 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:05:35.734594 | orchestrator | 2025-09-16 01:05:35 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:05:35.735848 | orchestrator | 2025-09-16 01:05:35 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:05:35.737604 | orchestrator | 2025-09-16 01:05:35 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED 2025-09-16 01:05:35.740125 | orchestrator | 2025-09-16 01:05:35 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:05:35.740258 | orchestrator | 2025-09-16 01:05:35 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:05:38.785237 | orchestrator | 2025-09-16 01:05:38 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:05:38.788099 | orchestrator | 2025-09-16 01:05:38 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:05:38.790700 | orchestrator | 2025-09-16 01:05:38 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED 2025-09-16 01:05:38.792959 | orchestrator | 2025-09-16 01:05:38 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:05:38.793641 | orchestrator | 2025-09-16 01:05:38 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:05:41.838578 | orchestrator | 2025-09-16 01:05:41 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:05:41.840874 | orchestrator | 2025-09-16 01:05:41 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:05:41.843777 | orchestrator | 2025-09-16 01:05:41 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state STARTED 2025-09-16 01:05:41.846327 | orchestrator | 2025-09-16 01:05:41 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:05:41.846634 | orchestrator | 2025-09-16 01:05:41 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:05:44.887969 | orchestrator | 2025-09-16 01:05:44 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state STARTED 2025-09-16 01:05:44.888413 | orchestrator | 2025-09-16 01:05:44 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:05:44.889504 | orchestrator | 2025-09-16 01:05:44 | INFO  | Task d09b299e-7f63-46ce-9a11-d6d34b8d6e2a is in state SUCCESS 2025-09-16 01:05:44.891275 | orchestrator | 2025-09-16 01:05:44.891310 | orchestrator | 2025-09-16 01:05:44.891322 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-16 01:05:44.891334 | orchestrator | 2025-09-16 01:05:44.891344 | orchestrator | 
TASK [Group hosts based on Kolla action] *************************************** 2025-09-16 01:05:44.891354 | orchestrator | Tuesday 16 September 2025 01:04:29 +0000 (0:00:00.235) 0:00:00.235 ***** 2025-09-16 01:05:44.891388 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:05:44.891399 | orchestrator | ok: [testbed-node-1] 2025-09-16 01:05:44.891409 | orchestrator | ok: [testbed-node-2] 2025-09-16 01:05:44.891419 | orchestrator | 2025-09-16 01:05:44.891429 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-16 01:05:44.891439 | orchestrator | Tuesday 16 September 2025 01:04:30 +0000 (0:00:00.253) 0:00:00.488 ***** 2025-09-16 01:05:44.891449 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-09-16 01:05:44.891459 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-09-16 01:05:44.891469 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-09-16 01:05:44.891479 | orchestrator | 2025-09-16 01:05:44.891489 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-09-16 01:05:44.891499 | orchestrator | 2025-09-16 01:05:44.891509 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-16 01:05:44.891655 | orchestrator | Tuesday 16 September 2025 01:04:30 +0000 (0:00:00.364) 0:00:00.853 ***** 2025-09-16 01:05:44.891670 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 01:05:44.891680 | orchestrator | 2025-09-16 01:05:44.891690 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-09-16 01:05:44.891699 | orchestrator | Tuesday 16 September 2025 01:04:31 +0000 (0:00:00.527) 0:00:01.380 ***** 2025-09-16 01:05:44.891709 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-09-16 01:05:44.891719 | orchestrator | 2025-09-16 01:05:44.891728 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-09-16 01:05:44.891738 | orchestrator | Tuesday 16 September 2025 01:04:33 +0000 (0:00:02.918) 0:00:04.299 ***** 2025-09-16 01:05:44.891748 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-09-16 01:05:44.891758 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-09-16 01:05:44.891767 | orchestrator | 2025-09-16 01:05:44.891790 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-09-16 01:05:44.891800 | orchestrator | Tuesday 16 September 2025 01:04:40 +0000 (0:00:06.437) 0:00:10.737 ***** 2025-09-16 01:05:44.891810 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-16 01:05:44.891820 | orchestrator | 2025-09-16 01:05:44.891830 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-09-16 01:05:44.891839 | orchestrator | Tuesday 16 September 2025 01:04:43 +0000 (0:00:03.630) 0:00:14.367 ***** 2025-09-16 01:05:44.891849 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-16 01:05:44.891859 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-09-16 01:05:44.891868 | orchestrator | 2025-09-16 01:05:44.891878 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 
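Editor's note: the service-ks-register steps in the placement play above (Creating services, Creating endpoints, Creating projects, Creating users, Creating roles, Granting user roles) follow the same Keystone bootstrap sequence used for every service in this deployment. As a rough illustration only, the sketch below reproduces that sequence with openstacksdk; the URLs, user name, and password are placeholders and are not values taken from this job, and the real tasks are idempotent (re-runs report "ok"), which this sketch does not handle.

# Hedged sketch of the service-ks-register sequence (placement as the example).
# Assumes openstacksdk with admin credentials available via clouds.yaml or OS_* vars;
# all names, URLs, and the password below are placeholders.
import openstack

conn = openstack.connect()

# 1. Register the service and its internal/public endpoints.
service = conn.identity.create_service(name="placement", type="placement")
for interface, url in [
    ("internal", "https://api-int.example.com:8780"),
    ("public", "https://api.example.com:8780"),
]:
    conn.identity.create_endpoint(
        service_id=service.id, interface=interface, url=url, region_id="RegionOne"
    )

# 2. Service project, service user, admin role, and the role grant.
project = conn.identity.create_project(name="service", domain_id="default")
user = conn.identity.create_user(
    name="placement", password="placeholder-secret", domain_id="default",
    default_project_id=project.id,
)
role = conn.identity.create_role(name="admin")
conn.identity.assign_project_role_to_user(project, user, role)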
2025-09-16 01:05:44.891887 | orchestrator | Tuesday 16 September 2025 01:04:47 +0000 (0:00:03.313) 0:00:17.681 ***** 2025-09-16 01:05:44.891897 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-16 01:05:44.891907 | orchestrator | 2025-09-16 01:05:44.891916 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-09-16 01:05:44.891926 | orchestrator | Tuesday 16 September 2025 01:04:50 +0000 (0:00:03.445) 0:00:21.127 ***** 2025-09-16 01:05:44.891935 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-09-16 01:05:44.891945 | orchestrator | 2025-09-16 01:05:44.891954 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-16 01:05:44.891964 | orchestrator | Tuesday 16 September 2025 01:04:55 +0000 (0:00:04.350) 0:00:25.477 ***** 2025-09-16 01:05:44.891973 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:44.891983 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:44.891993 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:44.892003 | orchestrator | 2025-09-16 01:05:44.892012 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-09-16 01:05:44.892031 | orchestrator | Tuesday 16 September 2025 01:04:55 +0000 (0:00:00.407) 0:00:25.884 ***** 2025-09-16 01:05:44.892043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-16 01:05:44.892070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-16 01:05:44.892081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-16 01:05:44.892092 | orchestrator | 2025-09-16 01:05:44.892106 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-09-16 01:05:44.892116 | orchestrator | Tuesday 16 September 2025 01:04:56 +0000 (0:00:01.357) 0:00:27.241 ***** 2025-09-16 01:05:44.892126 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:44.892135 | orchestrator | 2025-09-16 01:05:44.892145 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-09-16 01:05:44.892174 | orchestrator | Tuesday 16 September 2025 01:04:56 +0000 (0:00:00.111) 0:00:27.353 ***** 2025-09-16 01:05:44.892184 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:44.892194 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:44.892203 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:44.892213 | orchestrator | 2025-09-16 01:05:44.892223 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-16 01:05:44.892232 | orchestrator | Tuesday 16 September 2025 01:04:57 +0000 (0:00:00.436) 0:00:27.790 ***** 2025-09-16 01:05:44.892242 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 01:05:44.892261 | orchestrator | 2025-09-16 01:05:44.892270 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-09-16 01:05:44.892280 | orchestrator | Tuesday 16 September 2025 01:04:58 +0000 (0:00:00.751) 0:00:28.541 ***** 2025-09-16 01:05:44.892290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-16 01:05:44.892311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-16 01:05:44.892324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-16 01:05:44.892336 | orchestrator | 2025-09-16 01:05:44.892347 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-09-16 01:05:44.892358 | orchestrator | Tuesday 16 September 2025 01:05:00 +0000 (0:00:02.107) 0:00:30.648 ***** 2025-09-16 01:05:44.892374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-16 01:05:44.892394 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:44.892406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-16 01:05:44.892417 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:44.892435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-16 01:05:44.892447 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:44.892457 | orchestrator | 2025-09-16 01:05:44.892468 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-09-16 01:05:44.892480 | orchestrator | Tuesday 16 September 2025 01:05:02 +0000 (0:00:01.777) 0:00:32.426 ***** 2025-09-16 01:05:44.892491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-16 01:05:44.892503 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:44.892519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-16 01:05:44.892536 | orchestrator | skipping: 
[testbed-node-1] 2025-09-16 01:05:44.892546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-16 01:05:44.892557 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:44.892566 | orchestrator | 2025-09-16 01:05:44.892576 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-09-16 01:05:44.892585 | orchestrator | Tuesday 16 September 2025 01:05:03 +0000 (0:00:01.563) 0:00:33.990 ***** 2025-09-16 01:05:44.892600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-16 01:05:44.892611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-16 01:05:44.892626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-16 01:05:44.892644 | orchestrator | 2025-09-16 01:05:44.892654 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-09-16 01:05:44.892663 | orchestrator | Tuesday 16 September 2025 01:05:05 +0000 (0:00:01.741) 0:00:35.731 ***** 2025-09-16 01:05:44.892673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-16 01:05:44.892684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-16 01:05:44.892700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-16 01:05:44.892711 | orchestrator | 2025-09-16 01:05:44.892721 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-09-16 01:05:44.892730 | orchestrator | Tuesday 16 September 2025 01:05:07 +0000 (0:00:02.562) 0:00:38.293 ***** 2025-09-16 01:05:44.892740 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-16 01:05:44.892750 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-16 01:05:44.892760 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-16 01:05:44.892769 | orchestrator | 2025-09-16 01:05:44.892779 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-09-16 01:05:44.892803 | orchestrator | Tuesday 16 September 2025 01:05:09 +0000 (0:00:01.323) 0:00:39.617 ***** 2025-09-16 01:05:44.892812 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:05:44.892822 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:05:44.892832 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:05:44.892841 | orchestrator | 2025-09-16 01:05:44.892851 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-09-16 01:05:44.892860 | orchestrator | Tuesday 16 September 2025 01:05:10 +0000 (0:00:01.653) 0:00:41.270 ***** 2025-09-16 01:05:44.892874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-16 01:05:44.892885 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:44.892895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-16 
01:05:44.892905 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:44.892921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-16 01:05:44.892931 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:44.892940 | orchestrator | 2025-09-16 01:05:44.892950 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-09-16 01:05:44.892960 | orchestrator | Tuesday 16 September 2025 01:05:11 +0000 (0:00:01.029) 0:00:42.299 ***** 2025-09-16 01:05:44.892970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-16 01:05:44.892996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-16 01:05:44.893007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-16 01:05:44.893017 | orchestrator | 2025-09-16 01:05:44.893027 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-09-16 01:05:44.893036 | orchestrator | Tuesday 16 September 2025 01:05:13 +0000 (0:00:01.352) 0:00:43.652 ***** 2025-09-16 01:05:44.893046 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:05:44.893055 | orchestrator | 2025-09-16 01:05:44.893065 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-09-16 01:05:44.893074 | orchestrator | Tuesday 16 September 2025 01:05:15 +0000 (0:00:02.543) 0:00:46.195 ***** 2025-09-16 01:05:44.893084 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:05:44.893093 | orchestrator | 2025-09-16 01:05:44.893103 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-09-16 01:05:44.893112 | orchestrator | Tuesday 16 September 2025 01:05:18 +0000 (0:00:02.309) 0:00:48.505 ***** 2025-09-16 01:05:44.893121 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:05:44.893131 | orchestrator | 2025-09-16 01:05:44.893141 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-16 01:05:44.893175 | orchestrator | Tuesday 16 September 2025 01:05:32 +0000 (0:00:13.966) 0:01:02.471 ***** 2025-09-16 01:05:44.893185 | orchestrator | 2025-09-16 01:05:44.893194 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-16 01:05:44.893204 | orchestrator | Tuesday 16 September 2025 01:05:32 +0000 (0:00:00.058) 0:01:02.530 ***** 2025-09-16 01:05:44.893213 | orchestrator | 2025-09-16 01:05:44.893229 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-16 01:05:44.893245 | orchestrator | Tuesday 16 September 2025 01:05:32 +0000 (0:00:00.058) 0:01:02.588 ***** 2025-09-16 01:05:44.893255 | orchestrator | 2025-09-16 01:05:44.893264 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-09-16 01:05:44.893274 | orchestrator | Tuesday 16 September 2025 01:05:32 +0000 (0:00:00.074) 0:01:02.662 ***** 2025-09-16 01:05:44.893283 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:05:44.893293 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:05:44.893302 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:05:44.893312 | orchestrator | 2025-09-16 01:05:44.893322 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 01:05:44.893333 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-16 01:05:44.893344 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-16 01:05:44.893354 | orchestrator | testbed-node-2 : ok=12  changed=8  
unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-16 01:05:44.893363 | orchestrator | 2025-09-16 01:05:44.893373 | orchestrator | 2025-09-16 01:05:44.893383 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 01:05:44.893392 | orchestrator | Tuesday 16 September 2025 01:05:42 +0000 (0:00:10.448) 0:01:13.111 ***** 2025-09-16 01:05:44.893402 | orchestrator | =============================================================================== 2025-09-16 01:05:44.893411 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.97s 2025-09-16 01:05:44.893421 | orchestrator | placement : Restart placement-api container ---------------------------- 10.45s 2025-09-16 01:05:44.893430 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.44s 2025-09-16 01:05:44.893440 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.35s 2025-09-16 01:05:44.893449 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.63s 2025-09-16 01:05:44.893459 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.45s 2025-09-16 01:05:44.893473 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.31s 2025-09-16 01:05:44.893483 | orchestrator | service-ks-register : placement | Creating services --------------------- 2.92s 2025-09-16 01:05:44.893492 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.56s 2025-09-16 01:05:44.893502 | orchestrator | placement : Creating placement databases -------------------------------- 2.54s 2025-09-16 01:05:44.893511 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.31s 2025-09-16 01:05:44.893521 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.11s 2025-09-16 01:05:44.893530 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.78s 2025-09-16 01:05:44.893540 | orchestrator | placement : Copying over config.json files for services ----------------- 1.74s 2025-09-16 01:05:44.893549 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.65s 2025-09-16 01:05:44.893559 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.56s 2025-09-16 01:05:44.893568 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.36s 2025-09-16 01:05:44.893578 | orchestrator | placement : Check placement containers ---------------------------------- 1.35s 2025-09-16 01:05:44.893587 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.32s 2025-09-16 01:05:44.893597 | orchestrator | placement : Copying over existing policy file --------------------------- 1.03s 2025-09-16 01:05:44.893606 | orchestrator | 2025-09-16 01:05:44 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:05:44.893616 | orchestrator | 2025-09-16 01:05:44 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:05:44.893631 | orchestrator | 2025-09-16 01:05:44 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:05:47.924580 | orchestrator | 2025-09-16 01:05:47 | INFO  | Task fe6fead6-93bb-4706-970a-8ad836fe2dcb is in state SUCCESS 2025-09-16 01:05:47.925597 | 
orchestrator | 2025-09-16 01:05:47.925758 | orchestrator | 2025-09-16 01:05:47.925842 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-16 01:05:47.925859 | orchestrator | 2025-09-16 01:05:47.925871 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-16 01:05:47.925883 | orchestrator | Tuesday 16 September 2025 01:03:02 +0000 (0:00:00.468) 0:00:00.468 ***** 2025-09-16 01:05:47.925895 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:05:47.925908 | orchestrator | ok: [testbed-node-1] 2025-09-16 01:05:47.925920 | orchestrator | ok: [testbed-node-2] 2025-09-16 01:05:47.925932 | orchestrator | 2025-09-16 01:05:47.925943 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-16 01:05:47.925958 | orchestrator | Tuesday 16 September 2025 01:03:02 +0000 (0:00:00.359) 0:00:00.827 ***** 2025-09-16 01:05:47.925977 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-09-16 01:05:47.925993 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-09-16 01:05:47.926004 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-09-16 01:05:47.926992 | orchestrator | 2025-09-16 01:05:47.927020 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-09-16 01:05:47.927032 | orchestrator | 2025-09-16 01:05:47.927043 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-16 01:05:47.927055 | orchestrator | Tuesday 16 September 2025 01:03:03 +0000 (0:00:00.412) 0:00:01.240 ***** 2025-09-16 01:05:47.927066 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 01:05:47.927337 | orchestrator | 2025-09-16 01:05:47.927350 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-09-16 01:05:47.927362 | orchestrator | Tuesday 16 September 2025 01:03:03 +0000 (0:00:00.863) 0:00:02.104 ***** 2025-09-16 01:05:47.927374 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-09-16 01:05:47.927385 | orchestrator | 2025-09-16 01:05:47.927396 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-09-16 01:05:47.927408 | orchestrator | Tuesday 16 September 2025 01:03:07 +0000 (0:00:03.794) 0:00:05.900 ***** 2025-09-16 01:05:47.927419 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-09-16 01:05:47.927430 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-09-16 01:05:47.927441 | orchestrator | 2025-09-16 01:05:47.927452 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-09-16 01:05:47.927463 | orchestrator | Tuesday 16 September 2025 01:03:14 +0000 (0:00:06.662) 0:00:12.563 ***** 2025-09-16 01:05:47.927474 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-16 01:05:47.927486 | orchestrator | 2025-09-16 01:05:47.927497 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-09-16 01:05:47.927508 | orchestrator | Tuesday 16 September 2025 01:03:17 +0000 (0:00:03.552) 0:00:16.115 ***** 2025-09-16 01:05:47.927518 | orchestrator | [WARNING]: Module did not set no_log for update_password 
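The service-ks-register tasks in the designate play above reduce to ordinary Keystone registration calls. As a rough manual equivalent, the same state could be reached with the openstack CLI as sketched below; the region name, service description and password placeholder are illustrative assumptions, not values taken from this job, which performs these steps through Ansible modules instead:

    openstack service create --name designate --description "DNS service" dns
    openstack endpoint create --region RegionOne designate internal https://api-int.testbed.osism.xyz:9001
    openstack endpoint create --region RegionOne designate public https://api.testbed.osism.xyz:9001
    openstack user create --project service --password <designate-keystone-password> designate
    openstack role add --project service --user designate admin

In the run shown here, resources that already exist (for example the service project) are reported as "ok", while newly created ones show up as "changed".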
2025-09-16 01:05:47.927529 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-09-16 01:05:47.927540 | orchestrator | 2025-09-16 01:05:47.927551 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-09-16 01:05:47.927562 | orchestrator | Tuesday 16 September 2025 01:03:22 +0000 (0:00:04.312) 0:00:20.428 ***** 2025-09-16 01:05:47.927573 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-16 01:05:47.927584 | orchestrator | 2025-09-16 01:05:47.927595 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-09-16 01:05:47.927634 | orchestrator | Tuesday 16 September 2025 01:03:25 +0000 (0:00:03.466) 0:00:23.894 ***** 2025-09-16 01:05:47.927660 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-09-16 01:05:47.927672 | orchestrator | 2025-09-16 01:05:47.927683 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-09-16 01:05:47.927694 | orchestrator | Tuesday 16 September 2025 01:03:29 +0000 (0:00:04.242) 0:00:28.136 ***** 2025-09-16 01:05:47.927708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-16 01:05:47.927766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-16 01:05:47.927780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 
'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-16 01:05:47.927793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-16 01:05:47.927806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-16 01:05:47.927832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-16 01:05:47.927846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.927888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.927901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.927914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.927927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.927949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.927961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.927976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.928016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.928031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.928044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.928057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.928078 | orchestrator | 2025-09-16 01:05:47.928092 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-09-16 01:05:47.928104 | orchestrator | Tuesday 16 September 2025 01:03:34 +0000 (0:00:04.431) 0:00:32.568 ***** 2025-09-16 01:05:47.928117 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:47.928129 | orchestrator | 2025-09-16 01:05:47.928142 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-09-16 01:05:47.928177 | orchestrator | Tuesday 16 September 2025 01:03:34 +0000 (0:00:00.164) 0:00:32.733 ***** 2025-09-16 01:05:47.928273 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:47.928291 | orchestrator | skipping: [testbed-node-1] 2025-09-16 
01:05:47.928311 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:47.928324 | orchestrator | 2025-09-16 01:05:47.928335 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-16 01:05:47.928346 | orchestrator | Tuesday 16 September 2025 01:03:34 +0000 (0:00:00.248) 0:00:32.982 ***** 2025-09-16 01:05:47.928357 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 01:05:47.928368 | orchestrator | 2025-09-16 01:05:47.928379 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-09-16 01:05:47.928390 | orchestrator | Tuesday 16 September 2025 01:03:35 +0000 (0:00:00.559) 0:00:33.541 ***** 2025-09-16 01:05:47.928402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-16 01:05:47.928445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-16 01:05:47.928459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-16 01:05:47.928478 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-16 01:05:47.928495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-16 01:05:47.928507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-16 01:05:47.928519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.928557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.928570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.928590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.928602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.928618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.928630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.928668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.928681 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.928692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.928710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.928722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.928733 | orchestrator | 2025-09-16 01:05:47.928744 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-09-16 01:05:47.928755 | orchestrator | Tuesday 16 September 2025 01:03:42 +0000 (0:00:06.985) 0:00:40.527 ***** 2025-09-16 01:05:47.928772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-16 01:05:47.928784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-16 01:05:47.928822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.928848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.928860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.928871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.928883 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:47.928900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-16 01:05:47.928912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-16 01:05:47.928950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.928970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.928982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.928993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.929004 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:47.929021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-16 01:05:47.929033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-16 01:05:47.929072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.929091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.929103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.929114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.929125 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:47.929136 | orchestrator | 2025-09-16 01:05:47.929147 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-09-16 01:05:47.929219 | orchestrator | Tuesday 16 September 2025 01:03:43 +0000 (0:00:01.493) 0:00:42.020 ***** 2025-09-16 01:05:47.929237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-16 01:05:47.929249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-16 01:05:47.929288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  
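Each container definition iterated over in these loops carries a Kolla healthcheck spec ('healthcheck_curl <url>', 'healthcheck_port <process> 5672', 'healthcheck_listen named 53') together with interval, retries, start_period and timeout values; the test commands are wrapper scripts shipped inside the Kolla images and are wired into the container's CMD-SHELL healthcheck. Purely as an approximation of what each wrapper verifies (the commands below are assumptions, not the scripts' actual contents):

    # healthcheck_curl <url>: the service API answers an HTTP request
    curl --silent --fail --output /dev/null http://192.168.16.10:9001
    # healthcheck_listen <process> <port>: something is listening on the port (e.g. named on 53)
    ss -ltn | grep -q ':53 '
    # healthcheck_port <process> <port>: the process holds a connection to the port (5672 is the AMQP/RabbitMQ port)
    ss -tnp | grep ':5672 ' | grep -q designate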
2025-09-16 01:05:47.929308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.929318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.929328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.929338 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:47.929352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-16 01:05:47.929363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-16 01:05:47.929398 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.929415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.929426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.929436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.929446 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:47.929460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-16 01:05:47.929471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-16 01:05:47.929481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.929522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.929534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.929545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.929554 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:47.929564 | orchestrator | 2025-09-16 01:05:47.929574 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-09-16 01:05:47.929584 | orchestrator | Tuesday 16 September 2025 01:03:46 +0000 (0:00:02.797) 0:00:44.818 ***** 2025-09-16 01:05:47.929598 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-16 01:05:47.929609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-16 01:05:47.929652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-16 01:05:47.929664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-16 01:05:47.929674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-16 01:05:47.929684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-16 01:05:47.929698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.929708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.929748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.929759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.929770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.929780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.929790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.929838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.929850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.929895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.929906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.929917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.929927 | orchestrator | 2025-09-16 01:05:47.929937 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-09-16 01:05:47.929946 | orchestrator | Tuesday 16 September 2025 01:03:52 +0000 (0:00:05.833) 0:00:50.651 ***** 2025-09-16 01:05:47.929956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-16 01:05:47.929971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-16 01:05:47.929988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-16 01:05:47.930003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930274 | orchestrator | 2025-09-16 01:05:47.930284 | orchestrator | TASK [designate : Copying over pools.yaml] 
************************************* 2025-09-16 01:05:47.930294 | orchestrator | Tuesday 16 September 2025 01:04:09 +0000 (0:00:16.569) 0:01:07.220 ***** 2025-09-16 01:05:47.930304 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-16 01:05:47.930314 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-16 01:05:47.930323 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-16 01:05:47.930333 | orchestrator | 2025-09-16 01:05:47.930342 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-09-16 01:05:47.930352 | orchestrator | Tuesday 16 September 2025 01:04:13 +0000 (0:00:04.713) 0:01:11.934 ***** 2025-09-16 01:05:47.930361 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-16 01:05:47.930371 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-16 01:05:47.930381 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-16 01:05:47.930390 | orchestrator | 2025-09-16 01:05:47.930400 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-09-16 01:05:47.930409 | orchestrator | Tuesday 16 September 2025 01:04:16 +0000 (0:00:02.245) 0:01:14.180 ***** 2025-09-16 01:05:47.930429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-16 01:05:47.930440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-16 01:05:47.930456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 
'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-16 01:05:47.930467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.930487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.930507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.930518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.930543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.930553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.930564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2025-09-16 01:05:47.930601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.930612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.930622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930658 | orchestrator | 2025-09-16 01:05:47.930668 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-09-16 01:05:47.930683 | orchestrator | Tuesday 16 September 2025 01:04:19 +0000 (0:00:02.999) 0:01:17.179 ***** 2025-09-16 01:05:47.930693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-16 01:05:47.930711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-16 01:05:47.930721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-16 01:05:47.930736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.930762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.930773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.930787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.930807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.930822 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.930832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.930860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.930875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.930886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.930926 | orchestrator | 2025-09-16 01:05:47.930936 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-16 01:05:47.930945 | orchestrator | Tuesday 16 September 2025 01:04:21 +0000 (0:00:02.892) 0:01:20.071 ***** 2025-09-16 01:05:47.930955 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:47.930965 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:47.930975 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:47.930984 | orchestrator | 2025-09-16 01:05:47.930994 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-09-16 01:05:47.931004 | orchestrator | Tuesday 16 September 2025 01:04:22 +0000 (0:00:00.279) 0:01:20.351 ***** 2025-09-16 01:05:47.931014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-16 01:05:47.931028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-16 01:05:47.931038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.931048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.931065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.931081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.931091 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:47.931101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-16 01:05:47.931111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-16 01:05:47.931126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.931136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.931165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.931182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.931192 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:47.931202 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-16 01:05:47.931212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-16 01:05:47.931227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.931237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.931247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.931268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:05:47.931279 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:47.931289 | orchestrator | 2025-09-16 01:05:47.931299 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-09-16 01:05:47.931309 | orchestrator | Tuesday 16 September 2025 01:04:23 +0000 (0:00:01.093) 0:01:21.445 ***** 2025-09-16 01:05:47.931319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-16 01:05:47.931329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-16 01:05:47.931344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-16 
01:05:47.931354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-16 01:05:47.931375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-16 01:05:47.931386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-16 01:05:47.931396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.931406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.931420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.931430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.931452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.931463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.931473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.931483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.931493 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.931507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.931518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.931538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:05:47.931548 | orchestrator | 2025-09-16 01:05:47.931558 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-16 01:05:47.931568 | orchestrator | Tuesday 16 September 2025 01:04:28 +0000 (0:00:05.320) 0:01:26.765 ***** 2025-09-16 01:05:47.931578 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:05:47.931587 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:05:47.931597 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:05:47.931606 | orchestrator | 2025-09-16 01:05:47.931616 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-09-16 01:05:47.931625 | orchestrator | Tuesday 16 September 2025 01:04:29 +0000 (0:00:00.395) 0:01:27.160 ***** 2025-09-16 01:05:47.931635 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-09-16 01:05:47.931645 | orchestrator | 2025-09-16 01:05:47.931654 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-09-16 01:05:47.931664 | orchestrator | Tuesday 
16 September 2025 01:04:30 +0000 (0:00:01.824) 0:01:28.985 ***** 2025-09-16 01:05:47.931674 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-16 01:05:47.931684 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-09-16 01:05:47.931693 | orchestrator | 2025-09-16 01:05:47.931703 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-09-16 01:05:47.931713 | orchestrator | Tuesday 16 September 2025 01:04:33 +0000 (0:00:02.197) 0:01:31.182 ***** 2025-09-16 01:05:47.931722 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:05:47.931732 | orchestrator | 2025-09-16 01:05:47.931741 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-16 01:05:47.931751 | orchestrator | Tuesday 16 September 2025 01:04:47 +0000 (0:00:14.487) 0:01:45.669 ***** 2025-09-16 01:05:47.931760 | orchestrator | 2025-09-16 01:05:47.931770 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-16 01:05:47.931779 | orchestrator | Tuesday 16 September 2025 01:04:47 +0000 (0:00:00.263) 0:01:45.933 ***** 2025-09-16 01:05:47.931789 | orchestrator | 2025-09-16 01:05:47.931798 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-16 01:05:47.931808 | orchestrator | Tuesday 16 September 2025 01:04:47 +0000 (0:00:00.065) 0:01:45.999 ***** 2025-09-16 01:05:47.931817 | orchestrator | 2025-09-16 01:05:47.931827 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-09-16 01:05:47.931837 | orchestrator | Tuesday 16 September 2025 01:04:47 +0000 (0:00:00.067) 0:01:46.067 ***** 2025-09-16 01:05:47.931846 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:05:47.931856 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:05:47.931865 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:05:47.931875 | orchestrator | 2025-09-16 01:05:47.931884 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-09-16 01:05:47.931894 | orchestrator | Tuesday 16 September 2025 01:05:00 +0000 (0:00:12.373) 0:01:58.440 ***** 2025-09-16 01:05:47.931903 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:05:47.931913 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:05:47.931922 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:05:47.931937 | orchestrator | 2025-09-16 01:05:47.931947 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-09-16 01:05:47.931957 | orchestrator | Tuesday 16 September 2025 01:05:09 +0000 (0:00:09.021) 0:02:07.462 ***** 2025-09-16 01:05:47.931966 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:05:47.931976 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:05:47.931985 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:05:47.931995 | orchestrator | 2025-09-16 01:05:47.932004 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-09-16 01:05:47.932018 | orchestrator | Tuesday 16 September 2025 01:05:15 +0000 (0:00:06.392) 0:02:13.854 ***** 2025-09-16 01:05:47.932028 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:05:47.932038 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:05:47.932047 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:05:47.932057 | orchestrator | 2025-09-16 01:05:47.932066 | orchestrator | RUNNING 
HANDLER [designate : Restart designate-mdns container] ***************** 2025-09-16 01:05:47.932076 | orchestrator | Tuesday 16 September 2025 01:05:21 +0000 (0:00:05.709) 0:02:19.564 ***** 2025-09-16 01:05:47.932085 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:05:47.932095 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:05:47.932104 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:05:47.932114 | orchestrator | 2025-09-16 01:05:47.932124 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-09-16 01:05:47.932133 | orchestrator | Tuesday 16 September 2025 01:05:27 +0000 (0:00:06.020) 0:02:25.584 ***** 2025-09-16 01:05:47.932143 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:05:47.932166 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:05:47.932176 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:05:47.932186 | orchestrator | 2025-09-16 01:05:47.932196 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-09-16 01:05:47.932205 | orchestrator | Tuesday 16 September 2025 01:05:38 +0000 (0:00:11.235) 0:02:36.820 ***** 2025-09-16 01:05:47.932215 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:05:47.932224 | orchestrator | 2025-09-16 01:05:47.932234 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 01:05:47.932244 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-16 01:05:47.932255 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-16 01:05:47.932265 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-16 01:05:47.932274 | orchestrator | 2025-09-16 01:05:47.932284 | orchestrator | 2025-09-16 01:05:47.932299 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 01:05:47.932309 | orchestrator | Tuesday 16 September 2025 01:05:46 +0000 (0:00:07.596) 0:02:44.416 ***** 2025-09-16 01:05:47.932319 | orchestrator | =============================================================================== 2025-09-16 01:05:47.932328 | orchestrator | designate : Copying over designate.conf -------------------------------- 16.57s 2025-09-16 01:05:47.932338 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.49s 2025-09-16 01:05:47.932347 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.37s 2025-09-16 01:05:47.932357 | orchestrator | designate : Restart designate-worker container ------------------------- 11.24s 2025-09-16 01:05:47.932367 | orchestrator | designate : Restart designate-api container ----------------------------- 9.02s 2025-09-16 01:05:47.932376 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.60s 2025-09-16 01:05:47.932386 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.99s 2025-09-16 01:05:47.932395 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.67s 2025-09-16 01:05:47.932410 | orchestrator | designate : Restart designate-central container ------------------------- 6.39s 2025-09-16 01:05:47.932420 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.02s 2025-09-16 01:05:47.932430 | 
orchestrator | designate : Copying over config.json files for services ----------------- 5.83s 2025-09-16 01:05:47.932439 | orchestrator | designate : Restart designate-producer container ------------------------ 5.71s 2025-09-16 01:05:47.932449 | orchestrator | designate : Check designate containers ---------------------------------- 5.32s 2025-09-16 01:05:47.932458 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.71s 2025-09-16 01:05:47.932468 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.43s 2025-09-16 01:05:47.932478 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.31s 2025-09-16 01:05:47.932487 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.24s 2025-09-16 01:05:47.932497 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.80s 2025-09-16 01:05:47.932507 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.55s 2025-09-16 01:05:47.932516 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.47s 2025-09-16 01:05:47.932526 | orchestrator | 2025-09-16 01:05:47 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:05:47.932536 | orchestrator | 2025-09-16 01:05:47 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:05:47.932546 | orchestrator | 2025-09-16 01:05:47 | INFO  | Task 74bbc52a-570e-457d-9378-5fe6f9951d5e is in state STARTED 2025-09-16 01:05:47.932556 | orchestrator | 2025-09-16 01:05:47 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:05:47.932566 | orchestrator | 2025-09-16 01:05:47 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:05:50.979622 | orchestrator | 2025-09-16 01:05:50 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:05:50.979720 | orchestrator | 2025-09-16 01:05:50 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:05:50.980366 | orchestrator | 2025-09-16 01:05:50 | INFO  | Task 74bbc52a-570e-457d-9378-5fe6f9951d5e is in state STARTED 2025-09-16 01:05:50.981046 | orchestrator | 2025-09-16 01:05:50 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:05:50.981263 | orchestrator | 2025-09-16 01:05:50 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:05:54.035873 | orchestrator | 2025-09-16 01:05:54 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:05:54.037587 | orchestrator | 2025-09-16 01:05:54 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:05:54.037728 | orchestrator | 2025-09-16 01:05:54 | INFO  | Task 74bbc52a-570e-457d-9378-5fe6f9951d5e is in state SUCCESS 2025-09-16 01:05:54.038928 | orchestrator | 2025-09-16 01:05:54 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:05:54.040732 | orchestrator | 2025-09-16 01:05:54 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:05:54.040824 | orchestrator | 2025-09-16 01:05:54 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:05:57.087308 | orchestrator | 2025-09-16 01:05:57 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:05:57.088351 | orchestrator | 2025-09-16 01:05:57 | INFO  | Task 
b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:05:57.089476 | orchestrator | 2025-09-16 01:05:57 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:05:57.090334 | orchestrator | 2025-09-16 01:05:57 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:05:57.090473 | orchestrator | 2025-09-16 01:05:57 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:06:00.132652 | orchestrator | 2025-09-16 01:06:00 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:06:00.133591 | orchestrator | 2025-09-16 01:06:00 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:06:00.134824 | orchestrator | 2025-09-16 01:06:00 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:06:00.136123 | orchestrator | 2025-09-16 01:06:00 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:06:00.136146 | orchestrator | 2025-09-16 01:06:00 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:06:03.177305 | orchestrator | 2025-09-16 01:06:03 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:06:03.177799 | orchestrator | 2025-09-16 01:06:03 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:06:03.179095 | orchestrator | 2025-09-16 01:06:03 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:06:03.179939 | orchestrator | 2025-09-16 01:06:03 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:06:03.179963 | orchestrator | 2025-09-16 01:06:03 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:06:06.216691 | orchestrator | 2025-09-16 01:06:06 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:06:06.219452 | orchestrator | 2025-09-16 01:06:06 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:06:06.222448 | orchestrator | 2025-09-16 01:06:06 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:06:06.224776 | orchestrator | 2025-09-16 01:06:06 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:06:06.224795 | orchestrator | 2025-09-16 01:06:06 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:06:09.283018 | orchestrator | 2025-09-16 01:06:09 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:06:09.283607 | orchestrator | 2025-09-16 01:06:09 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:06:09.284548 | orchestrator | 2025-09-16 01:06:09 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:06:09.285783 | orchestrator | 2025-09-16 01:06:09 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:06:09.285807 | orchestrator | 2025-09-16 01:06:09 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:06:12.389762 | orchestrator | 2025-09-16 01:06:12 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:06:12.391635 | orchestrator | 2025-09-16 01:06:12 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:06:12.392407 | orchestrator | 2025-09-16 01:06:12 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:06:12.394727 | orchestrator | 2025-09-16 01:06:12 | INFO  | Task 
3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:06:12.394752 | orchestrator | 2025-09-16 01:06:12 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:06:15.446540 | orchestrator | 2025-09-16 01:06:15 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:06:15.449601 | orchestrator | 2025-09-16 01:06:15 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:06:15.452017 | orchestrator | 2025-09-16 01:06:15 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:06:15.454663 | orchestrator | 2025-09-16 01:06:15 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:06:15.455262 | orchestrator | 2025-09-16 01:06:15 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:06:18.494089 | orchestrator | 2025-09-16 01:06:18 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:06:18.494510 | orchestrator | 2025-09-16 01:06:18 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:06:18.496042 | orchestrator | 2025-09-16 01:06:18 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:06:18.496345 | orchestrator | 2025-09-16 01:06:18 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:06:18.496367 | orchestrator | 2025-09-16 01:06:18 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:06:21.544139 | orchestrator | 2025-09-16 01:06:21 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:06:21.545258 | orchestrator | 2025-09-16 01:06:21 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:06:21.546675 | orchestrator | 2025-09-16 01:06:21 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:06:21.547950 | orchestrator | 2025-09-16 01:06:21 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:06:21.548147 | orchestrator | 2025-09-16 01:06:21 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:06:24.588232 | orchestrator | 2025-09-16 01:06:24 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:06:24.588891 | orchestrator | 2025-09-16 01:06:24 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:06:24.591054 | orchestrator | 2025-09-16 01:06:24 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:06:24.592966 | orchestrator | 2025-09-16 01:06:24 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:06:24.593403 | orchestrator | 2025-09-16 01:06:24 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:06:27.638491 | orchestrator | 2025-09-16 01:06:27 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:06:27.640123 | orchestrator | 2025-09-16 01:06:27 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:06:27.642204 | orchestrator | 2025-09-16 01:06:27 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:06:27.643909 | orchestrator | 2025-09-16 01:06:27 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:06:27.644032 | orchestrator | 2025-09-16 01:06:27 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:06:30.681823 | orchestrator | 2025-09-16 01:06:30 | INFO  | Task 
e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:06:30.682150 | orchestrator | 2025-09-16 01:06:30 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:06:30.683779 | orchestrator | 2025-09-16 01:06:30 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:06:30.685111 | orchestrator | 2025-09-16 01:06:30 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:06:30.685449 | orchestrator | 2025-09-16 01:06:30 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:06:33.730655 | orchestrator | 2025-09-16 01:06:33 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:06:33.731531 | orchestrator | 2025-09-16 01:06:33 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:06:33.733260 | orchestrator | 2025-09-16 01:06:33 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:06:33.735557 | orchestrator | 2025-09-16 01:06:33 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:06:33.736439 | orchestrator | 2025-09-16 01:06:33 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:06:36.778522 | orchestrator | 2025-09-16 01:06:36 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:06:36.782117 | orchestrator | 2025-09-16 01:06:36 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:06:36.784733 | orchestrator | 2025-09-16 01:06:36 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:06:36.788051 | orchestrator | 2025-09-16 01:06:36 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:06:36.788081 | orchestrator | 2025-09-16 01:06:36 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:06:39.832311 | orchestrator | 2025-09-16 01:06:39 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:06:39.834094 | orchestrator | 2025-09-16 01:06:39 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:06:39.835277 | orchestrator | 2025-09-16 01:06:39 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:06:39.836584 | orchestrator | 2025-09-16 01:06:39 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:06:39.836824 | orchestrator | 2025-09-16 01:06:39 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:06:42.877897 | orchestrator | 2025-09-16 01:06:42 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:06:42.879482 | orchestrator | 2025-09-16 01:06:42 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:06:42.881067 | orchestrator | 2025-09-16 01:06:42 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:06:42.882831 | orchestrator | 2025-09-16 01:06:42 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:06:42.882855 | orchestrator | 2025-09-16 01:06:42 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:06:45.921320 | orchestrator | 2025-09-16 01:06:45 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:06:45.922529 | orchestrator | 2025-09-16 01:06:45 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:06:45.923775 | orchestrator | 2025-09-16 01:06:45 | INFO  | Task 
49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:06:45.925190 | orchestrator | 2025-09-16 01:06:45 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:06:45.925507 | orchestrator | 2025-09-16 01:06:45 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:06:48.966979 | orchestrator | 2025-09-16 01:06:48 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:06:48.968257 | orchestrator | 2025-09-16 01:06:48 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:06:48.969553 | orchestrator | 2025-09-16 01:06:48 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:06:48.971273 | orchestrator | 2025-09-16 01:06:48 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:06:48.971392 | orchestrator | 2025-09-16 01:06:48 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:06:52.018945 | orchestrator | 2025-09-16 01:06:52 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:06:52.021750 | orchestrator | 2025-09-16 01:06:52 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:06:52.024274 | orchestrator | 2025-09-16 01:06:52 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:06:52.028036 | orchestrator | 2025-09-16 01:06:52 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:06:52.028063 | orchestrator | 2025-09-16 01:06:52 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:06:55.080149 | orchestrator | 2025-09-16 01:06:55 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:06:55.081564 | orchestrator | 2025-09-16 01:06:55 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:06:55.083784 | orchestrator | 2025-09-16 01:06:55 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:06:55.085985 | orchestrator | 2025-09-16 01:06:55 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:06:55.086008 | orchestrator | 2025-09-16 01:06:55 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:06:58.123555 | orchestrator | 2025-09-16 01:06:58 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:06:58.124257 | orchestrator | 2025-09-16 01:06:58 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:06:58.125769 | orchestrator | 2025-09-16 01:06:58 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:06:58.128079 | orchestrator | 2025-09-16 01:06:58 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:06:58.128309 | orchestrator | 2025-09-16 01:06:58 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:07:01.173386 | orchestrator | 2025-09-16 01:07:01 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:07:01.174977 | orchestrator | 2025-09-16 01:07:01 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:07:01.175988 | orchestrator | 2025-09-16 01:07:01 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:07:01.177821 | orchestrator | 2025-09-16 01:07:01 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:07:01.178087 | orchestrator | 2025-09-16 01:07:01 | INFO  | Wait 1 
second(s) until the next check 2025-09-16 01:07:04.233860 | orchestrator | 2025-09-16 01:07:04 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:07:04.233964 | orchestrator | 2025-09-16 01:07:04 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:07:04.233979 | orchestrator | 2025-09-16 01:07:04 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:07:04.235653 | orchestrator | 2025-09-16 01:07:04 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:07:04.235747 | orchestrator | 2025-09-16 01:07:04 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:07:07.283765 | orchestrator | 2025-09-16 01:07:07 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:07:07.285205 | orchestrator | 2025-09-16 01:07:07 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:07:07.286791 | orchestrator | 2025-09-16 01:07:07 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:07:07.288391 | orchestrator | 2025-09-16 01:07:07 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:07:07.288414 | orchestrator | 2025-09-16 01:07:07 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:07:10.320307 | orchestrator | 2025-09-16 01:07:10 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:07:10.322465 | orchestrator | 2025-09-16 01:07:10 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:07:10.322487 | orchestrator | 2025-09-16 01:07:10 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:07:10.322867 | orchestrator | 2025-09-16 01:07:10 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:07:10.323007 | orchestrator | 2025-09-16 01:07:10 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:07:13.354238 | orchestrator | 2025-09-16 01:07:13 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:07:13.356341 | orchestrator | 2025-09-16 01:07:13 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:07:13.358499 | orchestrator | 2025-09-16 01:07:13 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:07:13.361258 | orchestrator | 2025-09-16 01:07:13 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:07:13.361426 | orchestrator | 2025-09-16 01:07:13 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:07:16.401435 | orchestrator | 2025-09-16 01:07:16 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:07:16.402702 | orchestrator | 2025-09-16 01:07:16 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:07:16.404462 | orchestrator | 2025-09-16 01:07:16 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:07:16.406008 | orchestrator | 2025-09-16 01:07:16 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:07:16.406075 | orchestrator | 2025-09-16 01:07:16 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:07:19.450312 | orchestrator | 2025-09-16 01:07:19 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:07:19.451874 | orchestrator | 2025-09-16 01:07:19 | INFO  | Task 
b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:07:19.454841 | orchestrator | 2025-09-16 01:07:19 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:07:19.457800 | orchestrator | 2025-09-16 01:07:19 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:07:19.458103 | orchestrator | 2025-09-16 01:07:19 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:07:22.508809 | orchestrator | 2025-09-16 01:07:22 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:07:22.510705 | orchestrator | 2025-09-16 01:07:22 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:07:22.512499 | orchestrator | 2025-09-16 01:07:22 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:07:22.514618 | orchestrator | 2025-09-16 01:07:22 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state STARTED 2025-09-16 01:07:22.514655 | orchestrator | 2025-09-16 01:07:22 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:07:25.554421 | orchestrator | 2025-09-16 01:07:25 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:07:25.555408 | orchestrator | 2025-09-16 01:07:25 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:07:25.556343 | orchestrator | 2025-09-16 01:07:25 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:07:25.557917 | orchestrator | 2025-09-16 01:07:25 | INFO  | Task 3f02715b-4578-4296-a759-0cdd76401036 is in state SUCCESS 2025-09-16 01:07:25.558181 | orchestrator | 2025-09-16 01:07:25.558208 | orchestrator | 2025-09-16 01:07:25.558219 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-16 01:07:25.558230 | orchestrator | 2025-09-16 01:07:25.558240 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-16 01:07:25.558250 | orchestrator | Tuesday 16 September 2025 01:05:50 +0000 (0:00:00.181) 0:00:00.181 ***** 2025-09-16 01:07:25.558260 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:07:25.558271 | orchestrator | ok: [testbed-node-1] 2025-09-16 01:07:25.558280 | orchestrator | ok: [testbed-node-2] 2025-09-16 01:07:25.558290 | orchestrator | 2025-09-16 01:07:25.558299 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-16 01:07:25.558309 | orchestrator | Tuesday 16 September 2025 01:05:51 +0000 (0:00:00.294) 0:00:00.476 ***** 2025-09-16 01:07:25.558319 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-09-16 01:07:25.558329 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-09-16 01:07:25.558338 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-09-16 01:07:25.558347 | orchestrator | 2025-09-16 01:07:25.558357 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-09-16 01:07:25.558367 | orchestrator | 2025-09-16 01:07:25.558376 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-09-16 01:07:25.558385 | orchestrator | Tuesday 16 September 2025 01:05:51 +0000 (0:00:00.582) 0:00:01.059 ***** 2025-09-16 01:07:25.558395 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:07:25.558405 | orchestrator | ok: [testbed-node-1] 2025-09-16 01:07:25.558414 | orchestrator | ok: [testbed-node-2] 
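[Editor's note, not part of the captured job output] A minimal sketch of what the "Waiting for Nova public port to be UP" task above amounts to: Ansible's wait_for polls a TCP port until it accepts connections before the play continues. The host and port below are assumptions for illustration only (the Nova public API commonly listens on 8774 behind the deployment's VIP); they are not taken from this log.

# Illustrative sketch (assumed host/port), not the job's actual implementation.
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 300.0, interval: float = 1.0) -> bool:
    """Return True once host:port accepts a TCP connection, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5):
                return True  # port is up, service reachable
        except OSError:
            time.sleep(interval)  # not listening yet; retry after a short pause
    return False

# Hypothetical usage against an assumed internal VIP:
# wait_for_port("192.168.16.9", 8774)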
2025-09-16 01:07:25.558423 | orchestrator | 2025-09-16 01:07:25.558433 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 01:07:25.558443 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 01:07:25.558454 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 01:07:25.558464 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 01:07:25.558473 | orchestrator | 2025-09-16 01:07:25.558483 | orchestrator | 2025-09-16 01:07:25.558493 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 01:07:25.558502 | orchestrator | Tuesday 16 September 2025 01:05:52 +0000 (0:00:00.823) 0:00:01.883 ***** 2025-09-16 01:07:25.558512 | orchestrator | =============================================================================== 2025-09-16 01:07:25.558521 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.83s 2025-09-16 01:07:25.558531 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s 2025-09-16 01:07:25.558541 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2025-09-16 01:07:25.558550 | orchestrator | 2025-09-16 01:07:25.559930 | orchestrator | 2025-09-16 01:07:25.559963 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-16 01:07:25.559973 | orchestrator | 2025-09-16 01:07:25.559982 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-16 01:07:25.559992 | orchestrator | Tuesday 16 September 2025 01:05:28 +0000 (0:00:00.458) 0:00:00.458 ***** 2025-09-16 01:07:25.560017 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:07:25.560043 | orchestrator | ok: [testbed-node-1] 2025-09-16 01:07:25.560053 | orchestrator | ok: [testbed-node-2] 2025-09-16 01:07:25.560063 | orchestrator | 2025-09-16 01:07:25.560073 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-16 01:07:25.560082 | orchestrator | Tuesday 16 September 2025 01:05:29 +0000 (0:00:00.469) 0:00:00.927 ***** 2025-09-16 01:07:25.560092 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-09-16 01:07:25.560102 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-09-16 01:07:25.560112 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-09-16 01:07:25.560122 | orchestrator | 2025-09-16 01:07:25.560132 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-09-16 01:07:25.560142 | orchestrator | 2025-09-16 01:07:25.560226 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-16 01:07:25.560236 | orchestrator | Tuesday 16 September 2025 01:05:29 +0000 (0:00:00.604) 0:00:01.532 ***** 2025-09-16 01:07:25.560467 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 01:07:25.560485 | orchestrator | 2025-09-16 01:07:25.560495 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-09-16 01:07:25.560505 | orchestrator | Tuesday 16 September 2025 01:05:30 +0000 (0:00:00.475) 0:00:02.008 ***** 2025-09-16 
01:07:25.560515 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-09-16 01:07:25.560524 | orchestrator | 2025-09-16 01:07:25.560534 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-09-16 01:07:25.560543 | orchestrator | Tuesday 16 September 2025 01:05:33 +0000 (0:00:03.423) 0:00:05.432 ***** 2025-09-16 01:07:25.560553 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-09-16 01:07:25.560563 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-09-16 01:07:25.560572 | orchestrator | 2025-09-16 01:07:25.560582 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-09-16 01:07:25.560591 | orchestrator | Tuesday 16 September 2025 01:05:40 +0000 (0:00:06.953) 0:00:12.385 ***** 2025-09-16 01:07:25.560601 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-16 01:07:25.560610 | orchestrator | 2025-09-16 01:07:25.560620 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-09-16 01:07:25.560629 | orchestrator | Tuesday 16 September 2025 01:05:44 +0000 (0:00:03.646) 0:00:16.031 ***** 2025-09-16 01:07:25.560638 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-16 01:07:25.560648 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-09-16 01:07:25.560658 | orchestrator | 2025-09-16 01:07:25.560667 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-09-16 01:07:25.560677 | orchestrator | Tuesday 16 September 2025 01:05:48 +0000 (0:00:04.109) 0:00:20.141 ***** 2025-09-16 01:07:25.560686 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-16 01:07:25.560695 | orchestrator | 2025-09-16 01:07:25.560705 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-09-16 01:07:25.560714 | orchestrator | Tuesday 16 September 2025 01:05:52 +0000 (0:00:03.739) 0:00:23.881 ***** 2025-09-16 01:07:25.560724 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-09-16 01:07:25.560734 | orchestrator | 2025-09-16 01:07:25.560743 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-09-16 01:07:25.560753 | orchestrator | Tuesday 16 September 2025 01:05:56 +0000 (0:00:04.389) 0:00:28.271 ***** 2025-09-16 01:07:25.560762 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:07:25.560772 | orchestrator | 2025-09-16 01:07:25.560781 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-09-16 01:07:25.560790 | orchestrator | Tuesday 16 September 2025 01:06:00 +0000 (0:00:03.648) 0:00:31.919 ***** 2025-09-16 01:07:25.560800 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:07:25.560830 | orchestrator | 2025-09-16 01:07:25.560839 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-09-16 01:07:25.560849 | orchestrator | Tuesday 16 September 2025 01:06:04 +0000 (0:00:04.183) 0:00:36.102 ***** 2025-09-16 01:07:25.560858 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:07:25.560868 | orchestrator | 2025-09-16 01:07:25.560877 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-09-16 01:07:25.560887 | orchestrator 
| Tuesday 16 September 2025 01:06:08 +0000 (0:00:03.994) 0:00:40.097 ***** 2025-09-16 01:07:25.560911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-16 01:07:25.560932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-16 01:07:25.560943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-16 01:07:25.560954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:07:25.560970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:07:25.560988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:07:25.560998 | orchestrator | 2025-09-16 01:07:25.561008 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-09-16 01:07:25.561018 | orchestrator | Tuesday 16 September 2025 01:06:10 +0000 (0:00:01.890) 0:00:41.988 ***** 2025-09-16 01:07:25.561032 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:07:25.561042 | orchestrator | 2025-09-16 01:07:25.561052 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-09-16 01:07:25.561061 | orchestrator | Tuesday 16 September 2025 01:06:10 +0000 (0:00:00.169) 0:00:42.158 ***** 2025-09-16 01:07:25.561071 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:07:25.561081 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:07:25.561090 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:07:25.561103 | orchestrator | 2025-09-16 01:07:25.561114 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-09-16 01:07:25.561125 | orchestrator | Tuesday 16 September 2025 01:06:10 +0000 (0:00:00.538) 0:00:42.696 ***** 2025-09-16 01:07:25.561137 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-16 01:07:25.561147 | orchestrator | 2025-09-16 01:07:25.561190 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-09-16 01:07:25.561202 | orchestrator | Tuesday 16 September 2025 01:06:12 +0000 (0:00:01.379) 0:00:44.075 ***** 2025-09-16 01:07:25.561214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-16 01:07:25.561227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-16 01:07:25.561245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-16 01:07:25.561271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:07:25.561284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:07:25.561296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:07:25.561308 | orchestrator | 2025-09-16 01:07:25.561320 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-09-16 01:07:25.561331 | orchestrator | Tuesday 16 September 2025 01:06:15 +0000 (0:00:02.823) 0:00:46.899 ***** 2025-09-16 01:07:25.561348 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:07:25.561360 | orchestrator | ok: [testbed-node-1] 2025-09-16 01:07:25.561372 | orchestrator | ok: [testbed-node-2] 2025-09-16 01:07:25.561383 | orchestrator | 2025-09-16 01:07:25.561393 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-16 01:07:25.561404 | orchestrator | Tuesday 16 September 2025 01:06:15 +0000 (0:00:00.301) 0:00:47.200 ***** 2025-09-16 01:07:25.561416 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 01:07:25.561427 | orchestrator | 2025-09-16 01:07:25.561440 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-09-16 01:07:25.561451 | orchestrator | Tuesday 16 September 2025 01:06:16 +0000 (0:00:00.813) 0:00:48.014 ***** 2025-09-16 01:07:25.561461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-16 01:07:25.561477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-16 01:07:25.561491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-16 01:07:25.561502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:07:25.561518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:07:25.561528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:07:25.561538 | orchestrator | 2025-09-16 01:07:25.561548 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-09-16 01:07:25.561557 | orchestrator | Tuesday 16 September 2025 01:06:19 +0000 (0:00:02.807) 0:00:50.822 ***** 2025-09-16 01:07:25.561574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-16 01:07:25.561589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-16 01:07:25.561599 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:07:25.561610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-16 01:07:25.561629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-16 01:07:25.561639 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:07:25.561650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-16 01:07:25.561666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-16 01:07:25.561677 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:07:25.561686 | orchestrator | 2025-09-16 01:07:25.561706 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-09-16 01:07:25.561716 | orchestrator | Tuesday 16 September 2025 01:06:19 +0000 (0:00:00.547) 0:00:51.369 ***** 2025-09-16 01:07:25.561726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-16 01:07:25.561742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-16 01:07:25.561753 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:07:25.561763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-16 01:07:25.561773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-16 01:07:25.561783 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:07:25.561804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-16 01:07:25.561815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-16 01:07:25.561832 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:07:25.561842 | orchestrator | 2025-09-16 01:07:25.561851 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-09-16 01:07:25.561861 | orchestrator | Tuesday 16 September 2025 01:06:20 +0000 (0:00:00.996) 0:00:52.366 ***** 2025-09-16 01:07:25.561871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-16 01:07:25.561881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-16 01:07:25.561896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-16 01:07:25.561911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:07:25.561927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:07:25.561937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:07:25.561947 | orchestrator | 2025-09-16 01:07:25.561957 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-09-16 01:07:25.561966 | orchestrator | Tuesday 16 September 2025 01:06:23 +0000 (0:00:02.548) 0:00:54.914 ***** 2025-09-16 01:07:25.561976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-16 01:07:25.561991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-16 01:07:25.562006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-16 01:07:25.562066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:07:25.562080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:07:25.562090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:07:25.562100 | orchestrator | 2025-09-16 01:07:25.562110 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-09-16 01:07:25.562120 | orchestrator | Tuesday 16 September 2025 01:06:28 +0000 (0:00:04.957) 0:00:59.872 ***** 2025-09-16 01:07:25.562137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-16 01:07:25.562174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-16 01:07:25.562185 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:07:25.562195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-16 01:07:25.562205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-16 01:07:25.562215 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:07:25.562225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-16 01:07:25.562241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-16 01:07:25.562261 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:07:25.562272 | orchestrator | 2025-09-16 01:07:25.562281 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-09-16 01:07:25.562291 | orchestrator | Tuesday 16 September 2025 01:06:28 +0000 (0:00:00.587) 0:01:00.460 ***** 2025-09-16 01:07:25.562301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-16 01:07:25.562312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-16 01:07:25.562322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-16 01:07:25.562332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:07:25.562357 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:07:25.562368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:07:25.562377 | orchestrator | 2025-09-16 01:07:25.562387 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-16 01:07:25.562397 | orchestrator | Tuesday 16 September 2025 01:06:30 +0000 (0:00:02.108) 0:01:02.568 ***** 2025-09-16 01:07:25.562407 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:07:25.562417 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:07:25.562427 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:07:25.562436 | orchestrator | 2025-09-16 01:07:25.562447 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-09-16 01:07:25.562456 | orchestrator | Tuesday 16 September 2025 01:06:31 +0000 (0:00:00.271) 0:01:02.840 ***** 2025-09-16 01:07:25.562466 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:07:25.562475 | orchestrator | 2025-09-16 01:07:25.562485 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-09-16 01:07:25.562495 | orchestrator | Tuesday 16 September 2025 01:06:33 +0000 (0:00:02.180) 0:01:05.020 ***** 2025-09-16 01:07:25.562504 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:07:25.562514 | orchestrator | 2025-09-16 01:07:25.562524 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-09-16 01:07:25.562534 | orchestrator | Tuesday 16 September 2025 01:06:35 +0000 (0:00:02.364) 0:01:07.385 ***** 2025-09-16 01:07:25.562543 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:07:25.562553 | orchestrator | 2025-09-16 01:07:25.562563 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-16 01:07:25.562572 | orchestrator | Tuesday 16 September 2025 01:06:50 +0000 (0:00:15.150) 0:01:22.536 ***** 2025-09-16 01:07:25.562582 | orchestrator | 2025-09-16 01:07:25.562592 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-16 01:07:25.562601 | orchestrator | Tuesday 16 September 2025 01:06:50 +0000 
(0:00:00.084) 0:01:22.620 ***** 2025-09-16 01:07:25.562611 | orchestrator | 2025-09-16 01:07:25.562621 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-16 01:07:25.562630 | orchestrator | Tuesday 16 September 2025 01:06:50 +0000 (0:00:00.079) 0:01:22.700 ***** 2025-09-16 01:07:25.562640 | orchestrator | 2025-09-16 01:07:25.562650 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-09-16 01:07:25.562659 | orchestrator | Tuesday 16 September 2025 01:06:51 +0000 (0:00:00.082) 0:01:22.782 ***** 2025-09-16 01:07:25.562669 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:07:25.562685 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:07:25.562695 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:07:25.562704 | orchestrator | 2025-09-16 01:07:25.562714 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-09-16 01:07:25.562723 | orchestrator | Tuesday 16 September 2025 01:07:09 +0000 (0:00:18.869) 0:01:41.651 ***** 2025-09-16 01:07:25.562732 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:07:25.562742 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:07:25.562752 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:07:25.562761 | orchestrator | 2025-09-16 01:07:25.562771 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 01:07:25.562781 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-16 01:07:25.562791 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-16 01:07:25.562801 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-16 01:07:25.562810 | orchestrator | 2025-09-16 01:07:25.562820 | orchestrator | 2025-09-16 01:07:25.562830 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 01:07:25.562839 | orchestrator | Tuesday 16 September 2025 01:07:24 +0000 (0:00:14.291) 0:01:55.943 ***** 2025-09-16 01:07:25.562848 | orchestrator | =============================================================================== 2025-09-16 01:07:25.562858 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 18.87s 2025-09-16 01:07:25.562873 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.15s 2025-09-16 01:07:25.562883 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 14.29s 2025-09-16 01:07:25.562892 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.95s 2025-09-16 01:07:25.562909 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.96s 2025-09-16 01:07:25.562919 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.39s 2025-09-16 01:07:25.562929 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.18s 2025-09-16 01:07:25.562938 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.11s 2025-09-16 01:07:25.562948 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.99s 2025-09-16 01:07:25.562957 | orchestrator | service-ks-register : magnum | Creating roles 
--------------------------- 3.74s 2025-09-16 01:07:25.562967 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.65s 2025-09-16 01:07:25.562976 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.65s 2025-09-16 01:07:25.562986 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.42s 2025-09-16 01:07:25.562995 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.82s 2025-09-16 01:07:25.563004 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.81s 2025-09-16 01:07:25.563014 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.55s 2025-09-16 01:07:25.563023 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.36s 2025-09-16 01:07:25.563033 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.18s 2025-09-16 01:07:25.563042 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.11s 2025-09-16 01:07:25.563052 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.89s 2025-09-16 01:07:25.563061 | orchestrator | 2025-09-16 01:07:25 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:07:28.587927 | orchestrator | 2025-09-16 01:07:28 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:07:28.589897 | orchestrator | 2025-09-16 01:07:28 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:07:28.590759 | orchestrator | 2025-09-16 01:07:28 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:07:28.590793 | orchestrator | 2025-09-16 01:07:28 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:07:31.615625 | orchestrator | 2025-09-16 01:07:31 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:07:31.617052 | orchestrator | 2025-09-16 01:07:31 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:07:31.617430 | orchestrator | 2025-09-16 01:07:31 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:07:31.617458 | orchestrator | 2025-09-16 01:07:31 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:07:34.652364 | orchestrator | 2025-09-16 01:07:34 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:07:34.654084 | orchestrator | 2025-09-16 01:07:34 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:07:34.654766 | orchestrator | 2025-09-16 01:07:34 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:07:34.654797 | orchestrator | 2025-09-16 01:07:34 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:07:37.684247 | orchestrator | 2025-09-16 01:07:37 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:07:37.684363 | orchestrator | 2025-09-16 01:07:37 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:07:37.685156 | orchestrator | 2025-09-16 01:07:37 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:07:37.685204 | orchestrator | 2025-09-16 01:07:37 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:07:40.733906 | orchestrator | 2025-09-16 01:07:40 | INFO  | Task 
e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:07:40.736686 | orchestrator | 2025-09-16 01:07:40 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:07:40.739386 | orchestrator | 2025-09-16 01:07:40 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:07:40.739479 | orchestrator | 2025-09-16 01:07:40 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:07:43.780084 | orchestrator | 2025-09-16 01:07:43 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:07:43.782203 | orchestrator | 2025-09-16 01:07:43 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:07:43.784222 | orchestrator | 2025-09-16 01:07:43 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:07:43.784248 | orchestrator | 2025-09-16 01:07:43 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:07:46.829246 | orchestrator | 2025-09-16 01:07:46 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:07:46.830830 | orchestrator | 2025-09-16 01:07:46 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:07:46.833077 | orchestrator | 2025-09-16 01:07:46 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:07:46.833300 | orchestrator | 2025-09-16 01:07:46 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:07:49.885016 | orchestrator | 2025-09-16 01:07:49 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:07:49.886577 | orchestrator | 2025-09-16 01:07:49 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:07:49.888193 | orchestrator | 2025-09-16 01:07:49 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:07:49.888223 | orchestrator | 2025-09-16 01:07:49 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:07:52.939063 | orchestrator | 2025-09-16 01:07:52 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:07:52.940072 | orchestrator | 2025-09-16 01:07:52 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:07:52.941485 | orchestrator | 2025-09-16 01:07:52 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:07:52.941508 | orchestrator | 2025-09-16 01:07:52 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:07:55.989853 | orchestrator | 2025-09-16 01:07:55 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:07:55.992937 | orchestrator | 2025-09-16 01:07:55 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:07:55.994620 | orchestrator | 2025-09-16 01:07:55 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:07:55.994813 | orchestrator | 2025-09-16 01:07:55 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:07:59.045572 | orchestrator | 2025-09-16 01:07:59 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:07:59.047692 | orchestrator | 2025-09-16 01:07:59 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:07:59.050418 | orchestrator | 2025-09-16 01:07:59 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:07:59.050446 | orchestrator | 2025-09-16 01:07:59 | INFO  | Wait 1 second(s) until the next 
check 2025-09-16 01:08:02.094325 | orchestrator | 2025-09-16 01:08:02 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:08:02.095526 | orchestrator | 2025-09-16 01:08:02 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:08:02.097043 | orchestrator | 2025-09-16 01:08:02 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:08:02.097273 | orchestrator | 2025-09-16 01:08:02 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:08:05.137864 | orchestrator | 2025-09-16 01:08:05 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:08:05.138657 | orchestrator | 2025-09-16 01:08:05 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state STARTED 2025-09-16 01:08:05.141977 | orchestrator | 2025-09-16 01:08:05 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:08:05.142136 | orchestrator | 2025-09-16 01:08:05 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:08:08.176994 | orchestrator | 2025-09-16 01:08:08 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED 2025-09-16 01:08:08.178623 | orchestrator | 2025-09-16 01:08:08 | INFO  | Task b3433733-3ef8-423a-9bc4-d489a5cffa45 is in state SUCCESS 2025-09-16 01:08:08.180759 | orchestrator | 2025-09-16 01:08:08.180793 | orchestrator | 2025-09-16 01:08:08.180806 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-16 01:08:08.180819 | orchestrator | 2025-09-16 01:08:08.180830 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-16 01:08:08.180842 | orchestrator | Tuesday 16 September 2025 01:05:47 +0000 (0:00:00.291) 0:00:00.291 ***** 2025-09-16 01:08:08.180853 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:08:08.180865 | orchestrator | ok: [testbed-node-1] 2025-09-16 01:08:08.180900 | orchestrator | ok: [testbed-node-2] 2025-09-16 01:08:08.180912 | orchestrator | 2025-09-16 01:08:08.180923 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-16 01:08:08.180934 | orchestrator | Tuesday 16 September 2025 01:05:48 +0000 (0:00:00.316) 0:00:00.608 ***** 2025-09-16 01:08:08.180945 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-09-16 01:08:08.180956 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-09-16 01:08:08.180967 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-09-16 01:08:08.180977 | orchestrator | 2025-09-16 01:08:08.181004 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-09-16 01:08:08.181133 | orchestrator | 2025-09-16 01:08:08.181149 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-16 01:08:08.181557 | orchestrator | Tuesday 16 September 2025 01:05:48 +0000 (0:00:00.455) 0:00:01.063 ***** 2025-09-16 01:08:08.181576 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 01:08:08.181588 | orchestrator | 2025-09-16 01:08:08.181599 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-09-16 01:08:08.181610 | orchestrator | Tuesday 16 September 2025 01:05:49 +0000 (0:00:00.621) 0:00:01.685 ***** 2025-09-16 01:08:08.181625 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-16 01:08:08.181640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-16 01:08:08.181653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-16 01:08:08.181665 | orchestrator | 2025-09-16 01:08:08.181676 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-09-16 01:08:08.181687 | orchestrator | Tuesday 16 September 2025 01:05:50 +0000 (0:00:01.091) 0:00:02.776 ***** 2025-09-16 01:08:08.181698 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-09-16 01:08:08.181764 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-09-16 01:08:08.181781 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-16 01:08:08.181792 | orchestrator | 2025-09-16 01:08:08.181875 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-16 01:08:08.181899 | orchestrator | Tuesday 16 September 2025 01:05:51 +0000 (0:00:00.798) 0:00:03.574 ***** 2025-09-16 01:08:08.181911 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 01:08:08.182345 | orchestrator | 2025-09-16 01:08:08.182371 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-09-16 01:08:08.182382 | orchestrator | Tuesday 16 September 2025 01:05:51 +0000 (0:00:00.667) 0:00:04.242 ***** 2025-09-16 01:08:08.182434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-16 01:08:08.182457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-16 01:08:08.182470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-16 01:08:08.182481 | orchestrator | 2025-09-16 01:08:08.182492 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-09-16 01:08:08.182503 | orchestrator | Tuesday 16 September 2025 01:05:53 +0000 (0:00:01.373) 0:00:05.616 ***** 2025-09-16 01:08:08.182514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-16 01:08:08.182526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-16 01:08:08.182548 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:08.182559 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:08.182599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-16 01:08:08.182612 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:08.182646 | orchestrator | 2025-09-16 01:08:08.182657 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-09-16 01:08:08.182668 | orchestrator | Tuesday 16 September 2025 01:05:53 +0000 (0:00:00.350) 0:00:05.966 ***** 2025-09-16 01:08:08.182686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-16 01:08:08.182698 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:08.182709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-16 01:08:08.182720 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:08.182731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-16 01:08:08.182742 | 
orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:08.182753 | orchestrator | 2025-09-16 01:08:08.182764 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-09-16 01:08:08.182775 | orchestrator | Tuesday 16 September 2025 01:05:54 +0000 (0:00:00.906) 0:00:06.873 ***** 2025-09-16 01:08:08.182786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-16 01:08:08.182810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-16 01:08:08.182853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-16 01:08:08.182867 | orchestrator | 2025-09-16 01:08:08.182878 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-09-16 01:08:08.182889 | orchestrator | Tuesday 16 September 2025 01:05:55 +0000 (0:00:01.415) 0:00:08.289 ***** 2025-09-16 01:08:08.182905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-16 01:08:08.182917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 
'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-16 01:08:08.182929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-16 01:08:08.182947 | orchestrator | 2025-09-16 01:08:08.182962 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-09-16 01:08:08.182974 | orchestrator | Tuesday 16 September 2025 01:05:57 +0000 (0:00:01.390) 0:00:09.679 ***** 2025-09-16 01:08:08.182987 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:08.183000 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:08.183012 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:08.183025 | orchestrator | 2025-09-16 01:08:08.183038 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-09-16 01:08:08.183051 | orchestrator | Tuesday 16 September 2025 01:05:57 +0000 (0:00:00.485) 0:00:10.164 ***** 2025-09-16 01:08:08.183064 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-16 01:08:08.183077 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-16 01:08:08.183091 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-16 01:08:08.183103 | orchestrator | 2025-09-16 01:08:08.183116 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-09-16 01:08:08.183129 | orchestrator | Tuesday 16 September 2025 01:05:58 +0000 (0:00:01.285) 0:00:11.450 ***** 2025-09-16 01:08:08.183141 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-16 01:08:08.183154 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-16 01:08:08.183191 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-16 01:08:08.183204 | orchestrator | 2025-09-16 01:08:08.183217 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-09-16 01:08:08.183230 | orchestrator | Tuesday 16 September 2025 01:06:00 +0000 (0:00:01.223) 0:00:12.674 ***** 2025-09-16 01:08:08.183274 | orchestrator | ok: 
[testbed-node-0 -> localhost] 2025-09-16 01:08:08.183289 | orchestrator | 2025-09-16 01:08:08.183303 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-09-16 01:08:08.183314 | orchestrator | Tuesday 16 September 2025 01:06:00 +0000 (0:00:00.704) 0:00:13.378 ***** 2025-09-16 01:08:08.183325 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-09-16 01:08:08.183336 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-09-16 01:08:08.183347 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:08:08.183358 | orchestrator | ok: [testbed-node-1] 2025-09-16 01:08:08.183369 | orchestrator | ok: [testbed-node-2] 2025-09-16 01:08:08.183379 | orchestrator | 2025-09-16 01:08:08.183391 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-09-16 01:08:08.183401 | orchestrator | Tuesday 16 September 2025 01:06:01 +0000 (0:00:00.670) 0:00:14.048 ***** 2025-09-16 01:08:08.183412 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:08.183423 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:08.183439 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:08.183451 | orchestrator | 2025-09-16 01:08:08.183461 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-09-16 01:08:08.183472 | orchestrator | Tuesday 16 September 2025 01:06:02 +0000 (0:00:00.463) 0:00:14.512 ***** 2025-09-16 01:08:08.183484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1058085, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3671846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1058085, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3671846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1058085, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3671846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1058119, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3808086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1058119, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3808086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1058119, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3808086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1058088, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3688946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1058088, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3688946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1058088, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3688946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1058125, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3836482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1058125, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3836482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1058125, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3836482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1058101, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3738947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1058101, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3738947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1058101, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3738947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1058111, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3790767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1058111, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3790767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1058111, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3790767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1058084, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 
1757981841.3670132, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1058084, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3670132, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1058084, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3670132, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1058086, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3678946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1058086, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3678946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1058086, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3678946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-09-16 01:08:08.183931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1058090, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3696713, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1058090, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3696713, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1058090, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3696713, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1058104, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3748946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.183989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1058104, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3748946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1058104, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3748946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1058118, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3801122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1058118, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3801122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1058118, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3801122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1058087, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3685913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 80386, 'inode': 1058087, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3685913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1058087, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3685913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1058109, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3778946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1058109, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3778946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1058109, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3778946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1058102, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3748946, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1058102, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3748946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1058102, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3748946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1058097, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.373643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1058097, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.373643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1058097, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.373643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1058095, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3718946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1058095, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3718946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1058095, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3718946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1058105, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3768947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1058105, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3768947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1058105, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3768947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1058092, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3708947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1058092, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3708947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1058092, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3708947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1058116, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.379444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1058116, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.379444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1058116, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.379444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1058225, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4218955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1058225, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4218955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1058225, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4218955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1058167, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 
'ctime': 1757981841.400895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1058167, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.400895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1058167, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.400895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1058151, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3858948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1058151, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3858948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1058151, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3858948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1058186, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.403507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1058186, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.403507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1058186, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.403507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1058139, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3849142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1058139, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3849142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1058139, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3849142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1058203, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4131389, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1058203, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4131389, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1058203, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4131389, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1058188, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4108953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184742 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1058188, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4108953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1058188, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4108953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1058205, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4139524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1058205, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4139524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1058205, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4139524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184808 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1058222, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4198954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1058222, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4198954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1058222, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4198954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1058202, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4118953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1058202, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4118953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1058202, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4118953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1058183, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4028122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1058183, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4028122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1058183, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4028122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1058164, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.393895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1058164, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.393895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1058164, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.393895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1058182, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.401895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1058182, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.401895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1058182, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.401895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.184993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1058154, 'dev': 148, 'nlink': 1, 
'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3934283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.185003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1058154, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3934283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.185019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1058154, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3934283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.185034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1058185, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4028952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.185049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1058185, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4028952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.185060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1058185, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 
1757980975.0, 'ctime': 1757981841.4028952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.185070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1058214, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4189973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.185080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1058214, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4189973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.185107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1058214, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4189973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.185122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1058210, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4158952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.185139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1058210, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4158952, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.185150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1058210, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4158952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.185175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1058143, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3858879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.185185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1058143, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3858879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.185202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1058143, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3858879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.185216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1058147, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3858948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.185227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1058147, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3858948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.185242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1058147, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.3858948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.185252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1058200, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4118953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.185262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1058200, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4118953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.185278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1058200, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4118953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.185288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1058208, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4148953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.185303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1058208, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4148953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.185319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1058208, 'dev': 148, 'nlink': 1, 'atime': 1757980975.0, 'mtime': 1757980975.0, 'ctime': 1757981841.4148953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-16 01:08:08.185329 | orchestrator | 2025-09-16 01:08:08.185339 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-09-16 01:08:08.185349 | orchestrator | Tuesday 16 September 2025 01:06:40 +0000 (0:00:38.885) 0:00:53.397 ***** 2025-09-16 01:08:08.185359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-16 01:08:08.185375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-16 01:08:08.185385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-16 01:08:08.185395 | orchestrator | 2025-09-16 01:08:08.185404 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-09-16 01:08:08.185414 | orchestrator | Tuesday 16 September 2025 01:06:42 +0000 (0:00:01.174) 0:00:54.572 ***** 2025-09-16 01:08:08.185424 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:08:08.185434 | orchestrator | 2025-09-16 01:08:08.185443 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-09-16 01:08:08.185453 | orchestrator | Tuesday 16 September 2025 01:06:44 +0000 (0:00:02.542) 0:00:57.114 ***** 2025-09-16 01:08:08.185462 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:08:08.185472 | orchestrator | 2025-09-16 01:08:08.185481 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-16 01:08:08.185491 | orchestrator | Tuesday 16 September 2025 01:06:47 +0000 (0:00:02.561) 0:00:59.676 ***** 2025-09-16 01:08:08.185501 | orchestrator | 2025-09-16 01:08:08.185510 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-16 01:08:08.185525 | orchestrator | Tuesday 16 September 2025 01:06:47 +0000 (0:00:00.084) 0:00:59.761 ***** 2025-09-16 01:08:08.185535 | orchestrator | 2025-09-16 01:08:08.185544 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-16 01:08:08.185554 | orchestrator | Tuesday 16 September 2025 01:06:47 +0000 (0:00:00.076) 0:00:59.838 ***** 2025-09-16 01:08:08.185564 | orchestrator | 2025-09-16 01:08:08.185573 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-09-16 01:08:08.185583 | orchestrator | Tuesday 16 September 2025 01:06:47 +0000 (0:00:00.280) 0:01:00.119 ***** 2025-09-16 01:08:08.185593 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:08.185603 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:08.185612 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:08:08.185622 | orchestrator | 2025-09-16 01:08:08.185632 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-09-16 01:08:08.185641 | orchestrator | Tuesday 16 September 2025 01:06:49 +0000 (0:00:02.038) 0:01:02.157 ***** 2025-09-16 01:08:08.185651 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:08.185665 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:08.185675 | orchestrator | FAILED - RETRYING: [testbed-node-0]: 
Waiting for grafana to start on first node (12 retries left). 2025-09-16 01:08:08.185685 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-09-16 01:08:08.185701 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-09-16 01:08:08.185711 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:08:08.185720 | orchestrator | 2025-09-16 01:08:08.185730 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-09-16 01:08:08.185740 | orchestrator | Tuesday 16 September 2025 01:07:28 +0000 (0:00:38.764) 0:01:40.922 ***** 2025-09-16 01:08:08.185749 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:08.185759 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:08:08.185769 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:08:08.185778 | orchestrator | 2025-09-16 01:08:08.185788 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-09-16 01:08:08.185798 | orchestrator | Tuesday 16 September 2025 01:08:01 +0000 (0:00:33.393) 0:02:14.315 ***** 2025-09-16 01:08:08.185807 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:08:08.185817 | orchestrator | 2025-09-16 01:08:08.185827 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-09-16 01:08:08.185836 | orchestrator | Tuesday 16 September 2025 01:08:04 +0000 (0:00:02.315) 0:02:16.631 ***** 2025-09-16 01:08:08.185846 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:08.185856 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:08.185865 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:08.185875 | orchestrator | 2025-09-16 01:08:08.185884 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-09-16 01:08:08.185894 | orchestrator | Tuesday 16 September 2025 01:08:04 +0000 (0:00:00.469) 0:02:17.101 ***** 2025-09-16 01:08:08.185905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-09-16 01:08:08.185917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-09-16 01:08:08.185928 | orchestrator | 2025-09-16 01:08:08.185937 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-09-16 01:08:08.185947 | orchestrator | Tuesday 16 September 2025 01:08:07 +0000 (0:00:02.484) 0:02:19.585 ***** 2025-09-16 01:08:08.185956 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:08.185966 | orchestrator | 2025-09-16 01:08:08.185976 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 01:08:08.185985 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-16 01:08:08.185996 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 
ignored=0
2025-09-16 01:08:08.186006 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-16 01:08:08.186057 | orchestrator |
2025-09-16 01:08:08.186069 | orchestrator |
2025-09-16 01:08:08.186079 | orchestrator | TASKS RECAP ********************************************************************
2025-09-16 01:08:08.186089 | orchestrator | Tuesday 16 September 2025 01:08:07 +0000 (0:00:00.255) 0:02:19.840 *****
2025-09-16 01:08:08.186099 | orchestrator | ===============================================================================
2025-09-16 01:08:08.186108 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.89s
2025-09-16 01:08:08.186118 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.76s
2025-09-16 01:08:08.186127 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 33.39s
2025-09-16 01:08:08.186143 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.56s
2025-09-16 01:08:08.186152 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.54s
2025-09-16 01:08:08.186213 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.48s
2025-09-16 01:08:08.186225 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.32s
2025-09-16 01:08:08.186234 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.04s
2025-09-16 01:08:08.186244 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.42s
2025-09-16 01:08:08.186254 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.39s
2025-09-16 01:08:08.186264 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.37s
2025-09-16 01:08:08.186273 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.29s
2025-09-16 01:08:08.186283 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.22s
2025-09-16 01:08:08.186292 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.17s
2025-09-16 01:08:08.186307 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.09s
2025-09-16 01:08:08.186317 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.91s
2025-09-16 01:08:08.186326 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.80s
2025-09-16 01:08:08.186336 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.70s
2025-09-16 01:08:08.186346 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.67s
2025-09-16 01:08:08.186355 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.67s
2025-09-16 01:08:08.186365 | orchestrator | 2025-09-16 01:08:08 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED
2025-09-16 01:08:08.186375 | orchestrator | 2025-09-16 01:08:08 | INFO  | Wait 1 second(s) until the next check
2025-09-16 01:08:11.220699 | orchestrator | 2025-09-16 01:08:11 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state STARTED
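The grafana recap above is dominated by two waits: the retry loop in "Waiting for grafana to start on first node" and the datasource provisioning in "Enable grafana datasources". Both are plain Grafana HTTP API interactions. The sketch below shows the same pattern in Python; the endpoint, credentials and the opensearch payload are placeholders taken from values visible in this log, not the kolla-ansible role's actual implementation.

    import base64
    import json
    import time
    import urllib.request

    GRAFANA = "https://api-int.testbed.osism.xyz:3000"  # assumed: internal FQDN plus the grafana listen_port from the log
    AUTH = "Basic " + base64.b64encode(b"admin:<password>").decode()  # placeholder credentials

    def wait_for_grafana(retries=12, delay=10):
        # Same idea as the "Waiting for grafana to start on first node" handler:
        # poll the health endpoint until Grafana answers 200, with a bounded number of retries.
        for _ in range(retries):
            try:
                with urllib.request.urlopen(GRAFANA + "/api/health", timeout=5) as resp:
                    if resp.status == 200:
                        return True
            except OSError:
                pass
            time.sleep(delay)
        return False

    def add_opensearch_datasource():
        # Same idea as "Enable grafana datasources": POST the datasource definition
        # that the task output above shows for opensearch.
        payload = {
            "name": "opensearch",
            "type": "grafana-opensearch-datasource",
            "access": "proxy",
            "url": "https://api-int.testbed.osism.xyz:9200",
            "jsonData": {"flavor": "OpenSearch", "database": "flog-*",
                         "version": "2.11.1", "timeField": "@timestamp",
                         "logLevelField": "log_level"},
        }
        req = urllib.request.Request(GRAFANA + "/api/datasources",
                                     data=json.dumps(payload).encode(),
                                     headers={"Content-Type": "application/json",
                                              "Authorization": AUTH},
                                     method="POST")
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status

In this run the handler needed three retries before Grafana answered, which accounts for most of the 38.76s shown in the recap.

2025-09-16 01:08:11.223236 | orchestrator | 2025-09-16 01:08:11 | INFO  | Task 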
49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED
2025-09-16 01:08:11.223271 | orchestrator | 2025-09-16 01:08:11 | INFO  | Wait 1 second(s) until the next check
2025-09-16 01:08:14.269052 | orchestrator | 2025-09-16 01:08:14 | INFO  | Task e86396fc-d217-49ea-a941-84e988c68282 is in state SUCCESS
2025-09-16 01:08:14.271889 | orchestrator |
2025-09-16 01:08:14.271935 | orchestrator |
2025-09-16 01:08:14.271948 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-16 01:08:14.271961 | orchestrator |
2025-09-16 01:08:14.271972 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-09-16 01:08:14.271984 | orchestrator | Tuesday 16 September 2025 00:59:37 +0000 (0:00:00.260) 0:00:00.260 *****
2025-09-16 01:08:14.271995 | orchestrator | changed: [testbed-manager]
2025-09-16 01:08:14.272008 | orchestrator | changed: [testbed-node-0]
2025-09-16 01:08:14.272020 | orchestrator | changed: [testbed-node-1]
2025-09-16 01:08:14.272030 | orchestrator | changed: [testbed-node-2]
2025-09-16 01:08:14.272041 | orchestrator | changed: [testbed-node-3]
2025-09-16 01:08:14.272052 | orchestrator | changed: [testbed-node-4]
2025-09-16 01:08:14.272062 | orchestrator | changed: [testbed-node-5]
2025-09-16 01:08:14.272073 | orchestrator |
2025-09-16 01:08:14.272084 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-16 01:08:14.272095 | orchestrator | Tuesday 16 September 2025 00:59:38 +0000 (0:00:00.687) 0:00:00.948 *****
2025-09-16 01:08:14.272152 | orchestrator | changed: [testbed-manager]
2025-09-16 01:08:14.272188 | orchestrator | changed: [testbed-node-0]
2025-09-16 01:08:14.272200 | orchestrator | changed: [testbed-node-1]
2025-09-16 01:08:14.272211 | orchestrator | changed: [testbed-node-2]
2025-09-16 01:08:14.272243 | orchestrator | changed: [testbed-node-3]
2025-09-16 01:08:14.272315 | orchestrator | changed: [testbed-node-4]
2025-09-16 01:08:14.272327 | orchestrator | changed: [testbed-node-5]
2025-09-16 01:08:14.272447 | orchestrator |
2025-09-16 01:08:14.272463 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-16 01:08:14.272474 | orchestrator | Tuesday 16 September 2025 00:59:39 +0000 (0:00:00.586) 0:00:01.534 *****
2025-09-16 01:08:14.272487 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-09-16 01:08:14.272501 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-09-16 01:08:14.272514 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-09-16 01:08:14.272527 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-09-16 01:08:14.272539 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-09-16 01:08:14.272551 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-09-16 01:08:14.272564 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-09-16 01:08:14.272576 | orchestrator |
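The next play bootstraps the Nova API databases on the first controller. The "Creating Nova databases" and "Creating Nova databases user and setting permissions" tasks shown below amount to ordinary MariaDB statements issued against the internal database endpoint. A rough Python equivalent, with placeholder host and credentials and MariaDB-style GRANT syntax rather than the actual kolla-ansible modules, looks like this:

    import pymysql  # third-party MySQL/MariaDB driver, used here only for illustration

    # Placeholders; the real values come from kolla's generated passwords, not shown here.
    conn = pymysql.connect(host="api-int.testbed.osism.xyz", port=3306,
                           user="root", password="<database password>")
    with conn.cursor() as cur:
        for db in ("nova_cell0", "nova_api"):
            # "Creating Nova databases": one item per database, as in the task output below
            cur.execute(f"CREATE DATABASE IF NOT EXISTS {db}")
            # "Creating Nova databases user and setting permissions" (MariaDB accepts
            # GRANT ... IDENTIFIED BY and creates the user on the fly)
            cur.execute(f"GRANT ALL PRIVILEGES ON {db}.* TO 'nova'@'%' "
                        f"IDENTIFIED BY '<nova database password>'")
    conn.commit()

The nova-cell play further down repeats the same pattern for the cell database.

2025-09-16 01:08:14.272589 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-09-16 01:08:14.272601 | orchestrator |
2025-09-16 01:08:14.272614 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-09-16 01:08:14.272627 | orchestrator | Tuesday 16 September 2025 00:59:39 +0000 (0:00:00.706) 0:00:02.240 *****
2025-09-16 01:08:14.272640 | orchestrator | included: nova 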
for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-16 01:08:14.272652 | orchestrator |
2025-09-16 01:08:14.272665 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-09-16 01:08:14.272678 | orchestrator | Tuesday 16 September 2025 00:59:40 +0000 (0:00:00.690) 0:00:02.931 *****
2025-09-16 01:08:14.272690 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-09-16 01:08:14.272704 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-09-16 01:08:14.272717 | orchestrator |
2025-09-16 01:08:14.272730 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-09-16 01:08:14.273179 | orchestrator | Tuesday 16 September 2025 00:59:44 +0000 (0:00:03.787) 0:00:06.718 *****
2025-09-16 01:08:14.273505 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-16 01:08:14.273556 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-16 01:08:14.273567 | orchestrator | changed: [testbed-node-0]
2025-09-16 01:08:14.273579 | orchestrator |
2025-09-16 01:08:14.273590 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-09-16 01:08:14.273601 | orchestrator | Tuesday 16 September 2025 00:59:48 +0000 (0:00:04.410) 0:00:11.129 *****
2025-09-16 01:08:14.273612 | orchestrator | changed: [testbed-node-0]
2025-09-16 01:08:14.273623 | orchestrator |
2025-09-16 01:08:14.273633 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-09-16 01:08:14.273645 | orchestrator | Tuesday 16 September 2025 00:59:49 +0000 (0:00:00.646) 0:00:11.775 *****
2025-09-16 01:08:14.274555 | orchestrator | changed: [testbed-node-0]
2025-09-16 01:08:14.274587 | orchestrator |
2025-09-16 01:08:14.274601 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-09-16 01:08:14.274642 | orchestrator | Tuesday 16 September 2025 00:59:50 +0000 (0:00:01.498) 0:00:13.274 *****
2025-09-16 01:08:14.274654 | orchestrator | changed: [testbed-node-0]
2025-09-16 01:08:14.274665 | orchestrator |
2025-09-16 01:08:14.274676 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-16 01:08:14.274687 | orchestrator | Tuesday 16 September 2025 00:59:54 +0000 (0:00:03.624) 0:00:16.898 *****
2025-09-16 01:08:14.274698 | orchestrator | skipping: [testbed-node-0]
2025-09-16 01:08:14.274709 | orchestrator | skipping: [testbed-node-1]
2025-09-16 01:08:14.274719 | orchestrator | skipping: [testbed-node-2]
2025-09-16 01:08:14.274730 | orchestrator |
2025-09-16 01:08:14.274741 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-09-16 01:08:14.274752 | orchestrator | Tuesday 16 September 2025 00:59:54 +0000 (0:00:00.435) 0:00:17.333 *****
2025-09-16 01:08:14.274792 | orchestrator | ok: [testbed-node-0]
2025-09-16 01:08:14.274805 | orchestrator |
2025-09-16 01:08:14.274816 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-09-16 01:08:14.274827 | orchestrator | Tuesday 16 September 2025 01:00:26 +0000 (0:00:31.115) 0:00:48.449 *****
2025-09-16 01:08:14.274838 | orchestrator | changed: [testbed-node-0]
2025-09-16 01:08:14.274848 | orchestrator |
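The "Create cell0 mappings" step above and the cell listing that follows both wrap the nova-manage cell_v2 utility from the Nova API image. A hand-run equivalent, sketched here with an assumed container name and a placeholder connection string rather than the role's actual tasks, would look roughly like this:

    import subprocess

    def nova_manage(*args):
        # Run nova-manage inside the nova_api container on the first controller (assumed name).
        cmd = ["docker", "exec", "nova_api", "nova-manage", *args]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    # Map cell0 to its database; the deployment passes the connection string it just created.
    nova_manage("cell_v2", "map_cell0",
                "--database_connection",
                "mysql+pymysql://nova:<password>@api-int.testbed.osism.xyz:3306/nova_cell0")

    # List the registered cells; this is what "Get a list of existing cells" inspects
    # before deciding whether the default cell still has to be created or updated.
    print(nova_manage("cell_v2", "list_cells", "--verbose"))

2025-09-16 01:08:14.274859 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-16 01:08:14.274870 | orchestrator | Tuesday 16 September 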
2025 01:00:42 +0000 (0:00:16.425) 0:01:04.874 ***** 2025-09-16 01:08:14.274881 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:08:14.274891 | orchestrator | 2025-09-16 01:08:14.274902 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-16 01:08:14.274913 | orchestrator | Tuesday 16 September 2025 01:00:55 +0000 (0:00:12.754) 0:01:17.629 ***** 2025-09-16 01:08:14.275237 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:08:14.275260 | orchestrator | 2025-09-16 01:08:14.275273 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-09-16 01:08:14.275285 | orchestrator | Tuesday 16 September 2025 01:00:56 +0000 (0:00:01.097) 0:01:18.727 ***** 2025-09-16 01:08:14.275296 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.275308 | orchestrator | 2025-09-16 01:08:14.275328 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-16 01:08:14.275339 | orchestrator | Tuesday 16 September 2025 01:00:56 +0000 (0:00:00.496) 0:01:19.223 ***** 2025-09-16 01:08:14.275352 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 01:08:14.275364 | orchestrator | 2025-09-16 01:08:14.275375 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-16 01:08:14.275386 | orchestrator | Tuesday 16 September 2025 01:00:57 +0000 (0:00:00.477) 0:01:19.701 ***** 2025-09-16 01:08:14.275397 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:08:14.275408 | orchestrator | 2025-09-16 01:08:14.275419 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-16 01:08:14.275430 | orchestrator | Tuesday 16 September 2025 01:01:15 +0000 (0:00:17.753) 0:01:37.454 ***** 2025-09-16 01:08:14.275441 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.275452 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.275463 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.275474 | orchestrator | 2025-09-16 01:08:14.275485 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-09-16 01:08:14.275495 | orchestrator | 2025-09-16 01:08:14.275506 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-16 01:08:14.275518 | orchestrator | Tuesday 16 September 2025 01:01:15 +0000 (0:00:00.298) 0:01:37.752 ***** 2025-09-16 01:08:14.275547 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 01:08:14.275586 | orchestrator | 2025-09-16 01:08:14.275609 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-09-16 01:08:14.275627 | orchestrator | Tuesday 16 September 2025 01:01:15 +0000 (0:00:00.543) 0:01:38.296 ***** 2025-09-16 01:08:14.275644 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.275663 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.275682 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:08:14.275700 | orchestrator | 2025-09-16 01:08:14.275718 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-09-16 01:08:14.275731 | orchestrator | Tuesday 16 September 2025 01:01:18 +0000 (0:00:02.256) 0:01:40.552 ***** 2025-09-16 01:08:14.275742 | orchestrator | skipping: [testbed-node-1] 2025-09-16 
01:08:14.275755 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.275768 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:08:14.275781 | orchestrator | 2025-09-16 01:08:14.275794 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-16 01:08:14.275822 | orchestrator | Tuesday 16 September 2025 01:01:20 +0000 (0:00:02.103) 0:01:42.655 ***** 2025-09-16 01:08:14.275834 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.275847 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.275860 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.275872 | orchestrator | 2025-09-16 01:08:14.275885 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-16 01:08:14.275898 | orchestrator | Tuesday 16 September 2025 01:01:20 +0000 (0:00:00.303) 0:01:42.959 ***** 2025-09-16 01:08:14.275911 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-16 01:08:14.275923 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.275936 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-16 01:08:14.275949 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.275962 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-16 01:08:14.275975 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-09-16 01:08:14.275988 | orchestrator | 2025-09-16 01:08:14.276000 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-16 01:08:14.276013 | orchestrator | Tuesday 16 September 2025 01:01:28 +0000 (0:00:08.310) 0:01:51.270 ***** 2025-09-16 01:08:14.276026 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.276038 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.276051 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.276064 | orchestrator | 2025-09-16 01:08:14.276078 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-16 01:08:14.276099 | orchestrator | Tuesday 16 September 2025 01:01:29 +0000 (0:00:00.350) 0:01:51.620 ***** 2025-09-16 01:08:14.276110 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-16 01:08:14.276121 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.276132 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-16 01:08:14.276143 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.276154 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-16 01:08:14.276191 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.276202 | orchestrator | 2025-09-16 01:08:14.276213 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-16 01:08:14.276224 | orchestrator | Tuesday 16 September 2025 01:01:29 +0000 (0:00:00.712) 0:01:52.333 ***** 2025-09-16 01:08:14.276235 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.276245 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.276256 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:08:14.276266 | orchestrator | 2025-09-16 01:08:14.276277 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-09-16 01:08:14.276288 | orchestrator | Tuesday 16 September 2025 01:01:30 +0000 (0:00:00.492) 0:01:52.826 ***** 2025-09-16 01:08:14.276299 | orchestrator | skipping: [testbed-node-1] 2025-09-16 
01:08:14.276310 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.276320 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:08:14.276331 | orchestrator | 2025-09-16 01:08:14.276341 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-09-16 01:08:14.276352 | orchestrator | Tuesday 16 September 2025 01:01:31 +0000 (0:00:01.218) 0:01:54.045 ***** 2025-09-16 01:08:14.276363 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.276374 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.276500 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:08:14.276525 | orchestrator | 2025-09-16 01:08:14.276537 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-09-16 01:08:14.276547 | orchestrator | Tuesday 16 September 2025 01:01:33 +0000 (0:00:02.223) 0:01:56.268 ***** 2025-09-16 01:08:14.276558 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.276569 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.276580 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:08:14.276591 | orchestrator | 2025-09-16 01:08:14.276602 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-16 01:08:14.276622 | orchestrator | Tuesday 16 September 2025 01:01:53 +0000 (0:00:19.628) 0:02:15.896 ***** 2025-09-16 01:08:14.276634 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.276644 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.276655 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:08:14.276666 | orchestrator | 2025-09-16 01:08:14.276677 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-16 01:08:14.276688 | orchestrator | Tuesday 16 September 2025 01:02:05 +0000 (0:00:12.393) 0:02:28.290 ***** 2025-09-16 01:08:14.276699 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:08:14.276710 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.276720 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.276731 | orchestrator | 2025-09-16 01:08:14.276742 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-09-16 01:08:14.276753 | orchestrator | Tuesday 16 September 2025 01:02:06 +0000 (0:00:01.064) 0:02:29.355 ***** 2025-09-16 01:08:14.276764 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.276775 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.276785 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:08:14.276796 | orchestrator | 2025-09-16 01:08:14.276807 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-09-16 01:08:14.276818 | orchestrator | Tuesday 16 September 2025 01:02:19 +0000 (0:00:12.118) 0:02:41.473 ***** 2025-09-16 01:08:14.276829 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.276839 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.276850 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.276861 | orchestrator | 2025-09-16 01:08:14.276872 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-16 01:08:14.276883 | orchestrator | Tuesday 16 September 2025 01:02:20 +0000 (0:00:01.620) 0:02:43.094 ***** 2025-09-16 01:08:14.276893 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.276904 | orchestrator | skipping: [testbed-node-1] 2025-09-16 
01:08:14.276915 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.276925 | orchestrator | 2025-09-16 01:08:14.276936 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-09-16 01:08:14.276947 | orchestrator | 2025-09-16 01:08:14.276958 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-16 01:08:14.276969 | orchestrator | Tuesday 16 September 2025 01:02:21 +0000 (0:00:00.964) 0:02:44.058 ***** 2025-09-16 01:08:14.276981 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 01:08:14.276993 | orchestrator | 2025-09-16 01:08:14.277004 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-09-16 01:08:14.277015 | orchestrator | Tuesday 16 September 2025 01:02:22 +0000 (0:00:01.211) 0:02:45.270 ***** 2025-09-16 01:08:14.277027 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-09-16 01:08:14.277038 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-09-16 01:08:14.277049 | orchestrator | 2025-09-16 01:08:14.277060 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-09-16 01:08:14.277071 | orchestrator | Tuesday 16 September 2025 01:02:26 +0000 (0:00:03.718) 0:02:48.988 ***** 2025-09-16 01:08:14.277082 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-09-16 01:08:14.277095 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-09-16 01:08:14.277109 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-09-16 01:08:14.277129 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-09-16 01:08:14.277142 | orchestrator | 2025-09-16 01:08:14.277155 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-09-16 01:08:14.277195 | orchestrator | Tuesday 16 September 2025 01:02:32 +0000 (0:00:06.071) 0:02:55.059 ***** 2025-09-16 01:08:14.277208 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-16 01:08:14.277220 | orchestrator | 2025-09-16 01:08:14.277233 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-09-16 01:08:14.277245 | orchestrator | Tuesday 16 September 2025 01:02:36 +0000 (0:00:03.523) 0:02:58.582 ***** 2025-09-16 01:08:14.277258 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-16 01:08:14.277270 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-09-16 01:08:14.277283 | orchestrator | 2025-09-16 01:08:14.277296 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-09-16 01:08:14.277308 | orchestrator | Tuesday 16 September 2025 01:02:40 +0000 (0:00:03.976) 0:03:02.559 ***** 2025-09-16 01:08:14.277321 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-16 01:08:14.277333 | orchestrator | 2025-09-16 01:08:14.277345 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-09-16 01:08:14.277358 | orchestrator | Tuesday 16 September 2025 01:02:44 +0000 (0:00:04.009) 0:03:06.569 ***** 
2025-09-16 01:08:14.277371 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-09-16 01:08:14.277383 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-09-16 01:08:14.277396 | orchestrator | 2025-09-16 01:08:14.277408 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-16 01:08:14.277502 | orchestrator | Tuesday 16 September 2025 01:02:52 +0000 (0:00:08.618) 0:03:15.187 ***** 2025-09-16 01:08:14.277533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-16 01:08:14.277553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-16 01:08:14.277580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-16 01:08:14.277628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.277644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.277656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.277667 | orchestrator | 2025-09-16 01:08:14.277679 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-09-16 01:08:14.277690 | orchestrator | Tuesday 16 September 2025 01:02:54 +0000 (0:00:01.496) 0:03:16.684 ***** 2025-09-16 01:08:14.277700 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.277712 | orchestrator | 2025-09-16 01:08:14.277722 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-09-16 01:08:14.277733 | orchestrator | Tuesday 16 September 2025 01:02:54 +0000 (0:00:00.207) 0:03:16.892 ***** 2025-09-16 01:08:14.277744 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.277755 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.277765 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.277776 | orchestrator | 2025-09-16 01:08:14.277787 | orchestrator | TASK [nova : Check for vendordata 
file] **************************************** 2025-09-16 01:08:14.277798 | orchestrator | Tuesday 16 September 2025 01:02:54 +0000 (0:00:00.281) 0:03:17.173 ***** 2025-09-16 01:08:14.277817 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-16 01:08:14.277828 | orchestrator | 2025-09-16 01:08:14.277839 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-09-16 01:08:14.277849 | orchestrator | Tuesday 16 September 2025 01:02:56 +0000 (0:00:01.282) 0:03:18.455 ***** 2025-09-16 01:08:14.277860 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.277871 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.277882 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.277893 | orchestrator | 2025-09-16 01:08:14.277904 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-16 01:08:14.277914 | orchestrator | Tuesday 16 September 2025 01:02:56 +0000 (0:00:00.263) 0:03:18.719 ***** 2025-09-16 01:08:14.277925 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 01:08:14.277936 | orchestrator | 2025-09-16 01:08:14.277947 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-16 01:08:14.277958 | orchestrator | Tuesday 16 September 2025 01:02:56 +0000 (0:00:00.475) 0:03:19.195 ***** 2025-09-16 01:08:14.277976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-16 01:08:14.278070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-16 01:08:14.278090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-16 01:08:14.278116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.278131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.278214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 
01:08:14.278231 | orchestrator | 2025-09-16 01:08:14.278244 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-16 01:08:14.278258 | orchestrator | Tuesday 16 September 2025 01:02:59 +0000 (0:00:02.741) 0:03:21.936 ***** 2025-09-16 01:08:14.278273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-16 01:08:14.278298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.278313 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.278341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-16 01:08:14.278356 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.278369 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.278418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-16 01:08:14.278435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.278460 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.278474 | orchestrator | 2025-09-16 01:08:14.278488 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-16 01:08:14.278501 | orchestrator | Tuesday 16 September 2025 01:03:00 +0000 (0:00:00.872) 0:03:22.809 ***** 2025-09-16 01:08:14.278519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-16 01:08:14.278532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.278543 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.278588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-16 01:08:14.278610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-16 01:08:14.278623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.278640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.278653 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.278664 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.278675 | orchestrator | 2025-09-16 01:08:14.278686 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-09-16 01:08:14.278697 | orchestrator | Tuesday 16 September 2025 01:03:01 +0000 (0:00:01.539) 0:03:24.348 ***** 2025-09-16 01:08:14.278739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-16 01:08:14.278761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-16 01:08:14.278780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-16 01:08:14.278793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.278835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.278849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.278868 | orchestrator | 2025-09-16 01:08:14.278879 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-09-16 01:08:14.278890 | orchestrator | Tuesday 16 September 2025 01:03:04 +0000 (0:00:02.625) 0:03:26.973 ***** 2025-09-16 01:08:14.278902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-16 01:08:14.278921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-16 01:08:14.278963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-16 01:08:14.278985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.278996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.279008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.279020 | orchestrator | 2025-09-16 01:08:14.279031 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-09-16 01:08:14.279043 | orchestrator | Tuesday 16 September 2025 01:03:12 +0000 (0:00:07.865) 0:03:34.839 ***** 2025-09-16 01:08:14.279055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-16 01:08:14.279096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.279118 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.279194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-16 01:08:14.279210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.279222 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.279240 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-16 01:08:14.279253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.279265 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.279277 | orchestrator | 2025-09-16 01:08:14.279289 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-09-16 01:08:14.279308 | orchestrator | Tuesday 16 September 2025 01:03:13 +0000 (0:00:01.031) 0:03:35.870 ***** 2025-09-16 01:08:14.279319 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:08:14.279331 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:08:14.279342 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:08:14.279354 | orchestrator | 2025-09-16 01:08:14.279400 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-09-16 01:08:14.279413 | orchestrator | Tuesday 16 September 2025 01:03:14 +0000 (0:00:01.411) 0:03:37.281 ***** 2025-09-16 01:08:14.279424 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.279435 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.279447 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.279458 | orchestrator | 2025-09-16 01:08:14.279469 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-09-16 01:08:14.279481 | orchestrator | Tuesday 16 September 2025 01:03:15 +0000 (0:00:00.255) 0:03:37.537 ***** 2025-09-16 01:08:14.279493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-16 01:08:14.279506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-16 01:08:14.279553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-16 01:08:14.279579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.279591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.279603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.279614 | orchestrator | 2025-09-16 01:08:14.279626 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-16 01:08:14.279637 | orchestrator | Tuesday 16 September 2025 01:03:17 +0000 (0:00:02.027) 0:03:39.564 ***** 2025-09-16 01:08:14.279647 | orchestrator | 2025-09-16 01:08:14.279659 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-16 01:08:14.279670 | orchestrator | Tuesday 16 September 2025 01:03:17 +0000 (0:00:00.125) 0:03:39.690 ***** 2025-09-16 01:08:14.279681 | orchestrator | 2025-09-16 01:08:14.279692 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-16 01:08:14.279703 | orchestrator | Tuesday 16 September 2025 01:03:17 +0000 (0:00:00.100) 0:03:39.790 ***** 2025-09-16 01:08:14.279714 | orchestrator | 2025-09-16 01:08:14.279725 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-09-16 01:08:14.279736 | orchestrator | Tuesday 16 September 2025 01:03:17 +0000 (0:00:00.100) 0:03:39.891 ***** 2025-09-16 01:08:14.279747 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:08:14.279757 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:08:14.279768 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:08:14.279779 | orchestrator | 2025-09-16 01:08:14.279790 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-09-16 01:08:14.279801 | orchestrator | Tuesday 16 September 2025 01:03:41 +0000 (0:00:24.190) 0:04:04.081 ***** 2025-09-16 01:08:14.279811 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:08:14.279822 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:08:14.279840 | orchestrator | changed: [testbed-node-2] 
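The "Check nova containers" task and the restart handlers above iterate over a map of service definitions; each item printed in the log carries the container name, image, enabled flag, volumes and healthcheck for one nova service. A minimal Python sketch of that data shape, reduced to the two entries logged for testbed-node-0 (illustration only, not kolla-ansible's actual code):

# Sketch of the service map iterated by "Check nova containers" and the
# restart handlers; values copied from the log records above.
nova_services = {
    "nova-api": {
        "container_name": "nova_api",
        "image": "registry.osism.tech/kolla/nova-api:2024.2",
        "group": "nova-api",
        "enabled": True,
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8774 "],
            "timeout": "30",
        },
    },
    "nova-scheduler": {
        "container_name": "nova_scheduler",
        "image": "registry.osism.tech/kolla/nova-scheduler:2024.2",
        "group": "nova-scheduler",
        "enabled": True,
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_port nova-scheduler 5672"],
            "timeout": "30",
        },
    },
}

# Each enabled service maps to one container check and, when the check
# reports a change, one restart handler run (as seen in the handler output).
for name, svc in nova_services.items():
    if svc["enabled"]:
        print(f"{svc['container_name']}: {svc['healthcheck']['test'][1]}")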
2025-09-16 01:08:14.279851 | orchestrator | 2025-09-16 01:08:14.279862 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-09-16 01:08:14.279872 | orchestrator | 2025-09-16 01:08:14.279883 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-16 01:08:14.279899 | orchestrator | Tuesday 16 September 2025 01:03:50 +0000 (0:00:08.474) 0:04:12.555 ***** 2025-09-16 01:08:14.279911 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 01:08:14.279923 | orchestrator | 2025-09-16 01:08:14.279934 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-16 01:08:14.279945 | orchestrator | Tuesday 16 September 2025 01:03:51 +0000 (0:00:01.073) 0:04:13.628 ***** 2025-09-16 01:08:14.279957 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:08:14.279968 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:08:14.279979 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:08:14.279990 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.280001 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.280012 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.280023 | orchestrator | 2025-09-16 01:08:14.280035 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-09-16 01:08:14.280046 | orchestrator | Tuesday 16 September 2025 01:03:51 +0000 (0:00:00.550) 0:04:14.179 ***** 2025-09-16 01:08:14.280057 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.280069 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.280080 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.280091 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 01:08:14.280102 | orchestrator | 2025-09-16 01:08:14.280114 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-16 01:08:14.280154 | orchestrator | Tuesday 16 September 2025 01:03:52 +0000 (0:00:01.029) 0:04:15.208 ***** 2025-09-16 01:08:14.280185 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-09-16 01:08:14.280198 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-09-16 01:08:14.280209 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-09-16 01:08:14.280220 | orchestrator | 2025-09-16 01:08:14.280231 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-16 01:08:14.280243 | orchestrator | Tuesday 16 September 2025 01:03:53 +0000 (0:00:00.932) 0:04:16.141 ***** 2025-09-16 01:08:14.280254 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-09-16 01:08:14.280266 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-09-16 01:08:14.280277 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-09-16 01:08:14.280288 | orchestrator | 2025-09-16 01:08:14.280300 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-16 01:08:14.280311 | orchestrator | Tuesday 16 September 2025 01:03:55 +0000 (0:00:01.409) 0:04:17.550 ***** 2025-09-16 01:08:14.280322 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-09-16 01:08:14.280333 | orchestrator | skipping: [testbed-node-3] 
2025-09-16 01:08:14.280344 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-09-16 01:08:14.280355 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:08:14.280367 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-09-16 01:08:14.280378 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:08:14.280389 | orchestrator | 2025-09-16 01:08:14.280400 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-09-16 01:08:14.280411 | orchestrator | Tuesday 16 September 2025 01:03:55 +0000 (0:00:00.785) 0:04:18.335 ***** 2025-09-16 01:08:14.280423 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-16 01:08:14.280434 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-16 01:08:14.280452 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.280464 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-16 01:08:14.280476 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-16 01:08:14.280487 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.280498 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-16 01:08:14.280509 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-16 01:08:14.280520 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-16 01:08:14.280532 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.280543 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-16 01:08:14.280554 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-16 01:08:14.280565 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-16 01:08:14.280577 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-16 01:08:14.280588 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-16 01:08:14.280599 | orchestrator | 2025-09-16 01:08:14.280610 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-09-16 01:08:14.280621 | orchestrator | Tuesday 16 September 2025 01:03:57 +0000 (0:00:01.294) 0:04:19.630 ***** 2025-09-16 01:08:14.280632 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.280644 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.280655 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.280666 | orchestrator | changed: [testbed-node-3] 2025-09-16 01:08:14.280677 | orchestrator | changed: [testbed-node-4] 2025-09-16 01:08:14.280688 | orchestrator | changed: [testbed-node-5] 2025-09-16 01:08:14.280700 | orchestrator | 2025-09-16 01:08:14.280711 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-09-16 01:08:14.280722 | orchestrator | Tuesday 16 September 2025 01:03:58 +0000 (0:00:01.535) 0:04:21.165 ***** 2025-09-16 01:08:14.280733 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.280744 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.280755 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.280771 | orchestrator | changed: [testbed-node-4] 2025-09-16 01:08:14.280783 | orchestrator | 
changed: [testbed-node-5] 2025-09-16 01:08:14.280794 | orchestrator | changed: [testbed-node-3] 2025-09-16 01:08:14.280805 | orchestrator | 2025-09-16 01:08:14.280816 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-16 01:08:14.280828 | orchestrator | Tuesday 16 September 2025 01:04:00 +0000 (0:00:01.958) 0:04:23.124 ***** 2025-09-16 01:08:14.280840 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-16 01:08:14.280884 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-16 01:08:14.280905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-16 01:08:14.280918 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-16 01:08:14.280930 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-16 01:08:14.280947 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-16 01:08:14.280960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-16 01:08:14.281000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.281020 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-16 01:08:14.281033 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.281047 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.281064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-16 01:08:14.281076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.281117 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.281137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': 
{'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.281149 | orchestrator | 2025-09-16 01:08:14.281213 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-16 01:08:14.281227 | orchestrator | Tuesday 16 September 2025 01:04:04 +0000 (0:00:03.436) 0:04:26.561 ***** 2025-09-16 01:08:14.281238 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 01:08:14.281250 | orchestrator | 2025-09-16 01:08:14.281261 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-16 01:08:14.281272 | orchestrator | Tuesday 16 September 2025 01:04:05 +0000 (0:00:01.204) 0:04:27.765 ***** 2025-09-16 01:08:14.281283 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-16 01:08:14.281301 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-16 01:08:14.281346 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-16 01:08:14.281368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-16 01:08:14.281380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-16 01:08:14.281392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-16 01:08:14.281404 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-16 01:08:14.281421 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': 
'30'}}}) 2025-09-16 01:08:14.281432 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-16 01:08:14.281479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.281492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.281503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.281513 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.281523 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.281538 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.281558 | orchestrator | 2025-09-16 01:08:14.281569 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-16 01:08:14.281579 | orchestrator | Tuesday 16 September 2025 01:04:09 +0000 (0:00:04.287) 0:04:32.052 ***** 2025-09-16 01:08:14.281615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-16 01:08:14.281627 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-16 01:08:14.281637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.281647 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:08:14.281662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-16 01:08:14.281673 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-16 01:08:14.281716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.281728 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:08:14.281738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-16 01:08:14.281749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.281759 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.281770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-16 01:08:14.281780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-16 01:08:14.281803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.281814 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:08:14.281851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-16 01:08:14.281863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.281873 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.281883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-16 01:08:14.281894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.281904 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.281914 | orchestrator | 2025-09-16 01:08:14.281924 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-16 01:08:14.281933 | orchestrator | Tuesday 16 September 2025 01:04:11 +0000 (0:00:02.099) 0:04:34.152 ***** 2025-09-16 01:08:14.281948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-16 01:08:14.281966 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-16 01:08:14.282002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-16 01:08:14.282013 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-16 01:08:14.282073 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.282084 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.282102 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:08:14.282112 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:08:14.282129 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-16 01:08:14.282186 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-16 01:08:14.282199 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.282210 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:08:14.282220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-16 01:08:14.282230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.282247 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.282257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-16 01:08:14.282273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.282283 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.282294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-16 01:08:14.282329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.282341 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.282351 | orchestrator | 2025-09-16 01:08:14.282361 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-16 01:08:14.282371 | orchestrator | Tuesday 16 September 2025 01:04:13 +0000 (0:00:02.261) 0:04:36.414 ***** 2025-09-16 01:08:14.282381 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.282391 | orchestrator | skipping: [testbed-node-1] 
2025-09-16 01:08:14.282401 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.282411 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-16 01:08:14.282421 | orchestrator | 2025-09-16 01:08:14.282431 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-09-16 01:08:14.282441 | orchestrator | Tuesday 16 September 2025 01:04:14 +0000 (0:00:01.011) 0:04:37.426 ***** 2025-09-16 01:08:14.282451 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-16 01:08:14.282461 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-16 01:08:14.282470 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-16 01:08:14.282480 | orchestrator | 2025-09-16 01:08:14.282490 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-09-16 01:08:14.282500 | orchestrator | Tuesday 16 September 2025 01:04:15 +0000 (0:00:00.852) 0:04:38.278 ***** 2025-09-16 01:08:14.282510 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-16 01:08:14.282520 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-16 01:08:14.282538 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-16 01:08:14.282548 | orchestrator | 2025-09-16 01:08:14.282558 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-09-16 01:08:14.282568 | orchestrator | Tuesday 16 September 2025 01:04:16 +0000 (0:00:01.040) 0:04:39.319 ***** 2025-09-16 01:08:14.282577 | orchestrator | ok: [testbed-node-4] 2025-09-16 01:08:14.282587 | orchestrator | ok: [testbed-node-3] 2025-09-16 01:08:14.282597 | orchestrator | ok: [testbed-node-5] 2025-09-16 01:08:14.282607 | orchestrator | 2025-09-16 01:08:14.282617 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-09-16 01:08:14.282627 | orchestrator | Tuesday 16 September 2025 01:04:17 +0000 (0:00:00.474) 0:04:39.794 ***** 2025-09-16 01:08:14.282637 | orchestrator | ok: [testbed-node-3] 2025-09-16 01:08:14.282647 | orchestrator | ok: [testbed-node-4] 2025-09-16 01:08:14.282657 | orchestrator | ok: [testbed-node-5] 2025-09-16 01:08:14.282667 | orchestrator | 2025-09-16 01:08:14.282676 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-09-16 01:08:14.282687 | orchestrator | Tuesday 16 September 2025 01:04:18 +0000 (0:00:00.674) 0:04:40.468 ***** 2025-09-16 01:08:14.282696 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-16 01:08:14.282706 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-16 01:08:14.282717 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-16 01:08:14.282726 | orchestrator | 2025-09-16 01:08:14.282737 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-09-16 01:08:14.282746 | orchestrator | Tuesday 16 September 2025 01:04:19 +0000 (0:00:01.314) 0:04:41.782 ***** 2025-09-16 01:08:14.282756 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-16 01:08:14.282766 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-16 01:08:14.282776 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-16 01:08:14.282786 | orchestrator | 2025-09-16 01:08:14.282796 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-09-16 01:08:14.282806 | 
orchestrator | Tuesday 16 September 2025 01:04:20 +0000 (0:00:01.429) 0:04:43.212 ***** 2025-09-16 01:08:14.282816 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-16 01:08:14.282830 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-16 01:08:14.282840 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-16 01:08:14.282850 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-09-16 01:08:14.282860 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-09-16 01:08:14.282870 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-09-16 01:08:14.282880 | orchestrator | 2025-09-16 01:08:14.282890 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-09-16 01:08:14.282900 | orchestrator | Tuesday 16 September 2025 01:04:24 +0000 (0:00:03.983) 0:04:47.196 ***** 2025-09-16 01:08:14.282910 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:08:14.282920 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:08:14.282929 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:08:14.282939 | orchestrator | 2025-09-16 01:08:14.282949 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-09-16 01:08:14.282959 | orchestrator | Tuesday 16 September 2025 01:04:25 +0000 (0:00:00.488) 0:04:47.684 ***** 2025-09-16 01:08:14.282969 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:08:14.282979 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:08:14.282989 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:08:14.282999 | orchestrator | 2025-09-16 01:08:14.283009 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-09-16 01:08:14.283019 | orchestrator | Tuesday 16 September 2025 01:04:25 +0000 (0:00:00.325) 0:04:48.009 ***** 2025-09-16 01:08:14.283029 | orchestrator | changed: [testbed-node-3] 2025-09-16 01:08:14.283039 | orchestrator | changed: [testbed-node-5] 2025-09-16 01:08:14.283056 | orchestrator | changed: [testbed-node-4] 2025-09-16 01:08:14.283066 | orchestrator | 2025-09-16 01:08:14.283104 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-09-16 01:08:14.283116 | orchestrator | Tuesday 16 September 2025 01:04:26 +0000 (0:00:01.235) 0:04:49.245 ***** 2025-09-16 01:08:14.283127 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-16 01:08:14.283138 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-16 01:08:14.283148 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-16 01:08:14.283158 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-16 01:08:14.283185 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-16 01:08:14.283195 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-16 01:08:14.283205 | orchestrator | 2025-09-16 
01:08:14.283215 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-09-16 01:08:14.283225 | orchestrator | Tuesday 16 September 2025 01:04:30 +0000 (0:00:03.280) 0:04:52.526 ***** 2025-09-16 01:08:14.283235 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-16 01:08:14.283245 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-16 01:08:14.283255 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-16 01:08:14.283265 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-16 01:08:14.283275 | orchestrator | changed: [testbed-node-3] 2025-09-16 01:08:14.283285 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-16 01:08:14.283294 | orchestrator | changed: [testbed-node-4] 2025-09-16 01:08:14.283304 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-16 01:08:14.283314 | orchestrator | changed: [testbed-node-5] 2025-09-16 01:08:14.283324 | orchestrator | 2025-09-16 01:08:14.283333 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-09-16 01:08:14.283343 | orchestrator | Tuesday 16 September 2025 01:04:33 +0000 (0:00:03.035) 0:04:55.561 ***** 2025-09-16 01:08:14.283353 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:08:14.283363 | orchestrator | 2025-09-16 01:08:14.283373 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-09-16 01:08:14.283383 | orchestrator | Tuesday 16 September 2025 01:04:33 +0000 (0:00:00.129) 0:04:55.690 ***** 2025-09-16 01:08:14.283393 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:08:14.283402 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:08:14.283412 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:08:14.283422 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.283432 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.283442 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.283451 | orchestrator | 2025-09-16 01:08:14.283461 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-09-16 01:08:14.283472 | orchestrator | Tuesday 16 September 2025 01:04:33 +0000 (0:00:00.663) 0:04:56.354 ***** 2025-09-16 01:08:14.283481 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-16 01:08:14.283491 | orchestrator | 2025-09-16 01:08:14.283501 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-09-16 01:08:14.283511 | orchestrator | Tuesday 16 September 2025 01:04:34 +0000 (0:00:00.659) 0:04:57.013 ***** 2025-09-16 01:08:14.283521 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:08:14.283531 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:08:14.283541 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:08:14.283550 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.283568 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.283577 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.283587 | orchestrator | 2025-09-16 01:08:14.283597 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-09-16 01:08:14.283607 | orchestrator | Tuesday 16 September 2025 01:04:35 +0000 (0:00:00.829) 0:04:57.842 ***** 2025-09-16 01:08:14.283625 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 
'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-16 01:08:14.283644 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-16 01:08:14.283655 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-16 01:08:14.283665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-16 01:08:14.283675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-16 01:08:14.283698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-16 01:08:14.283709 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-16 01:08:14.283730 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-16 01:08:14.283741 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-16 01:08:14.283751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.283761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.283771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.283793 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.283809 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.283820 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.283830 | orchestrator 
| 2025-09-16 01:08:14.283840 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-09-16 01:08:14.283850 | orchestrator | Tuesday 16 September 2025 01:04:39 +0000 (0:00:04.224) 0:05:02.067 ***** 2025-09-16 01:08:14.283861 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-16 01:08:14.283871 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-16 01:08:14.283896 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-16 01:08:14.283907 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-16 01:08:14.283924 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-16 01:08:14.283935 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-16 01:08:14.283946 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.283964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-16 01:08:14.283979 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.283996 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.284006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-16 01:08:14.284017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-16 01:08:14.284027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.284044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.284059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.284069 | orchestrator | 2025-09-16 01:08:14.284080 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-09-16 01:08:14.284090 | orchestrator | Tuesday 16 September 2025 01:04:45 +0000 (0:00:05.962) 0:05:08.029 ***** 2025-09-16 01:08:14.284100 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:08:14.284110 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:08:14.284121 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:08:14.284131 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.284141 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.284150 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.284178 | orchestrator | 2025-09-16 01:08:14.284189 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-09-16 01:08:14.284199 | orchestrator | Tuesday 16 September 2025 01:04:46 +0000 (0:00:01.261) 0:05:09.290 ***** 2025-09-16 01:08:14.284209 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-16 01:08:14.284219 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-16 01:08:14.284229 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-16 01:08:14.284239 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-16 01:08:14.284255 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-16 01:08:14.284265 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-16 01:08:14.284276 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.284286 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-16 01:08:14.284295 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-16 01:08:14.284306 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.284315 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-16 01:08:14.284325 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.284335 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-16 01:08:14.284345 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-16 01:08:14.284355 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-16 01:08:14.284372 | orchestrator | 2025-09-16 01:08:14.284382 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-09-16 01:08:14.284392 | orchestrator | Tuesday 16 September 2025 01:04:51 +0000 (0:00:04.846) 0:05:14.137 ***** 2025-09-16 01:08:14.284402 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:08:14.284412 | orchestrator | skipping: [testbed-node-4] 
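For context on the libvirt steps above: the "Pushing nova secret xml for libvirt" and "Pushing secrets key for libvirt" tasks registered the Ceph client.nova and client.cinder keys as libvirt secrets under the fixed UUIDs shown in their items (5a2bf0bf-... and 63dd366f-...), which is what lets qemu attach RBD-backed disks without a keyring on the compute host, and the "Copying over libvirt configuration" task templated qemu.conf and libvirtd.conf only on the compute nodes. The secret registration is functionally the standard virsh workflow; an illustrative Ansible-style sketch of the equivalent steps follows (the shell module choice, the secret XML path and the ceph_nova_keyring_key variable are assumptions, not taken from the role):

- name: Register the Ceph client.nova key as a libvirt secret (illustrative sketch)
  shell: |
    # the XML file pairs the fixed UUID with a ceph-type usage name
    virsh secret-define /etc/libvirt/secrets/client-nova-secret.xml
    # load the base64 key that was extracted from the ceph keyring file earlier in the play
    virsh secret-set-value 5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd --base64 "{{ ceph_nova_keyring_key }}"

nova.conf then only has to reference the same UUID (rbd_secret_uuid) together with the rbd_user; the key material itself stays inside libvirt's secret store.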
2025-09-16 01:08:14.284422 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:08:14.284432 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.284442 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.284452 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.284462 | orchestrator | 2025-09-16 01:08:14.284473 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-09-16 01:08:14.284483 | orchestrator | Tuesday 16 September 2025 01:04:52 +0000 (0:00:00.538) 0:05:14.675 ***** 2025-09-16 01:08:14.284493 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-16 01:08:14.284503 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-16 01:08:14.284513 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-16 01:08:14.284523 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-16 01:08:14.284533 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-16 01:08:14.284544 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-16 01:08:14.284553 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-16 01:08:14.284563 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-16 01:08:14.284573 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-16 01:08:14.284583 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-16 01:08:14.284593 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.284603 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-16 01:08:14.284613 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.284628 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-16 01:08:14.284638 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.284648 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-16 01:08:14.284658 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-16 01:08:14.284668 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-16 01:08:14.284678 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-16 01:08:14.284688 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-16 01:08:14.284698 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-16 
01:08:14.284708 | orchestrator | 2025-09-16 01:08:14.284718 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-09-16 01:08:14.284728 | orchestrator | Tuesday 16 September 2025 01:04:57 +0000 (0:00:05.340) 0:05:20.016 ***** 2025-09-16 01:08:14.284749 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-16 01:08:14.284759 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-16 01:08:14.284776 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-16 01:08:14.284786 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-16 01:08:14.284796 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-16 01:08:14.284806 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-16 01:08:14.284816 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-16 01:08:14.284826 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-16 01:08:14.284836 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-16 01:08:14.284846 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-16 01:08:14.284855 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-16 01:08:14.284865 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-16 01:08:14.284875 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-16 01:08:14.284885 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-16 01:08:14.284895 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-16 01:08:14.284904 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-16 01:08:14.284914 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.284924 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-16 01:08:14.284934 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.284944 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-16 01:08:14.284954 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.284964 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-16 01:08:14.284974 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-16 01:08:14.284984 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-16 01:08:14.284994 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-16 01:08:14.285004 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-16 01:08:14.285013 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-16 01:08:14.285023 | orchestrator | 2025-09-16 
01:08:14.285033 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-09-16 01:08:14.285043 | orchestrator | Tuesday 16 September 2025 01:05:07 +0000 (0:00:09.634) 0:05:29.651 ***** 2025-09-16 01:08:14.285053 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:08:14.285063 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:08:14.285073 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:08:14.285083 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.285093 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.285102 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.285112 | orchestrator | 2025-09-16 01:08:14.285122 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-09-16 01:08:14.285132 | orchestrator | Tuesday 16 September 2025 01:05:07 +0000 (0:00:00.621) 0:05:30.272 ***** 2025-09-16 01:08:14.285142 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:08:14.285158 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:08:14.285217 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:08:14.285227 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.285236 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.285246 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.285255 | orchestrator | 2025-09-16 01:08:14.285273 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-09-16 01:08:14.285283 | orchestrator | Tuesday 16 September 2025 01:05:08 +0000 (0:00:00.540) 0:05:30.813 ***** 2025-09-16 01:08:14.285293 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.285302 | orchestrator | changed: [testbed-node-3] 2025-09-16 01:08:14.285311 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.285321 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.285330 | orchestrator | changed: [testbed-node-5] 2025-09-16 01:08:14.285340 | orchestrator | changed: [testbed-node-4] 2025-09-16 01:08:14.285349 | orchestrator | 2025-09-16 01:08:14.285359 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-09-16 01:08:14.285369 | orchestrator | Tuesday 16 September 2025 01:05:10 +0000 (0:00:02.112) 0:05:32.925 ***** 2025-09-16 01:08:14.285386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-16 01:08:14.285397 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-16 01:08:14.285407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.285418 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:08:14.285428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-16 01:08:14.285447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-16 01:08:14.285455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.285464 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:08:14.285477 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-16 01:08:14.285486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-16 01:08:14.285495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-16 01:08:14.285504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.285522 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.285531 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.285539 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:08:14.285548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-16 01:08:14.285561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.285569 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.285578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-16 01:08:14.285586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-16 01:08:14.285602 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.285610 | orchestrator | 2025-09-16 01:08:14.285619 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-09-16 01:08:14.285627 | orchestrator | Tuesday 16 September 2025 01:05:12 +0000 (0:00:02.212) 0:05:35.137 ***** 2025-09-16 01:08:14.285635 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-16 01:08:14.285643 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-16 01:08:14.285651 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:08:14.285660 | orchestrator | 
skipping: [testbed-node-4] => (item=nova-compute)  2025-09-16 01:08:14.285668 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-16 01:08:14.285676 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:08:14.285684 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-16 01:08:14.285692 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-16 01:08:14.285700 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:08:14.285708 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-16 01:08:14.285716 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-16 01:08:14.285725 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.285732 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-16 01:08:14.285740 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-16 01:08:14.285749 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.285757 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-16 01:08:14.285765 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-16 01:08:14.285773 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.285781 | orchestrator | 2025-09-16 01:08:14.285789 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-09-16 01:08:14.285801 | orchestrator | Tuesday 16 September 2025 01:05:13 +0000 (0:00:00.634) 0:05:35.772 ***** 2025-09-16 01:08:14.285810 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-16 01:08:14.285823 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-16 01:08:14.285832 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-16 01:08:14.285846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-16 01:08:14.285855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-16 01:08:14.285868 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-16 01:08:14.285876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-16 01:08:14.285890 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-16 01:08:14.285899 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-16 01:08:14.285913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.285922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.285930 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.285942 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.285955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.285964 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-16 01:08:14.285978 | orchestrator | 2025-09-16 01:08:14.285987 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-16 01:08:14.285995 | orchestrator | Tuesday 16 September 2025 01:05:15 +0000 (0:00:02.653) 0:05:38.425 ***** 2025-09-16 01:08:14.286003 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:08:14.286012 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:08:14.286045 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:08:14.286054 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.286062 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.286070 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.286078 | orchestrator | 2025-09-16 01:08:14.286087 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-16 01:08:14.286095 | orchestrator | Tuesday 16 September 2025 01:05:16 +0000 (0:00:00.904) 0:05:39.330 ***** 2025-09-16 01:08:14.286103 | orchestrator | 2025-09-16 01:08:14.286111 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-16 01:08:14.286120 | orchestrator | Tuesday 16 September 2025 01:05:17 +0000 (0:00:00.174) 0:05:39.504 ***** 2025-09-16 01:08:14.286128 | orchestrator | 2025-09-16 01:08:14.286136 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-16 01:08:14.286144 | orchestrator | Tuesday 16 September 2025 01:05:17 +0000 (0:00:00.207) 0:05:39.712 ***** 2025-09-16 01:08:14.286152 | orchestrator | 2025-09-16 01:08:14.286175 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-16 01:08:14.286183 | orchestrator | Tuesday 16 September 2025 01:05:17 +0000 (0:00:00.140) 0:05:39.853 ***** 2025-09-16 
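
The container definitions checked above carry kolla-style healthchecks such as healthcheck_port nova-compute 5672, healthcheck_curl http://192.168.16.10:6080/vnc_lite.html and virsh version --daemon. kolla's healthcheck_port actually verifies that the named process holds a connection to the given port; the sketch below is only a rough standalone approximation using plain TCP and HTTP probes from the Python standard library, with the address and ports copied from the log output above.

    import socket
    import urllib.request

    def port_open(host, port, timeout=5.0):
        # Plain TCP reachability probe; kolla's healthcheck_port is stricter and
        # inspects the named process's own connections.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def http_ok(url, timeout=5.0):
        # Rough stand-in for healthcheck_curl: any HTTP status below 400 counts as healthy.
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 400
        except OSError:
            return False

    if __name__ == "__main__":
        # Address and ports taken from the healthcheck definitions in the log above.
        print("rabbitmq port reachable:", port_open("192.168.16.10", 5672))
        print("novncproxy answering:", http_ok("http://192.168.16.10:6080/vnc_lite.html"))
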
01:08:14.286192 | orchestrator | 2025-09-16 01:08:14.286200 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-16 01:08:14.286208 | orchestrator | Tuesday 16 September 2025 01:05:17 +0000 (0:00:00.177) 0:05:40.031 ***** 2025-09-16 01:08:14.286216 | orchestrator | 2025-09-16 01:08:14.286224 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-16 01:08:14.286232 | orchestrator | Tuesday 16 September 2025 01:05:17 +0000 (0:00:00.160) 0:05:40.191 ***** 2025-09-16 01:08:14.286241 | orchestrator | 2025-09-16 01:08:14.286249 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-09-16 01:08:14.286257 | orchestrator | Tuesday 16 September 2025 01:05:18 +0000 (0:00:00.314) 0:05:40.506 ***** 2025-09-16 01:08:14.286265 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:08:14.286272 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:08:14.286280 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:08:14.286288 | orchestrator | 2025-09-16 01:08:14.286296 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-09-16 01:08:14.286304 | orchestrator | Tuesday 16 September 2025 01:05:25 +0000 (0:00:07.033) 0:05:47.540 ***** 2025-09-16 01:08:14.286311 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:08:14.286319 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:08:14.286327 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:08:14.286335 | orchestrator | 2025-09-16 01:08:14.286347 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-09-16 01:08:14.286355 | orchestrator | Tuesday 16 September 2025 01:05:42 +0000 (0:00:17.866) 0:06:05.406 ***** 2025-09-16 01:08:14.286363 | orchestrator | changed: [testbed-node-5] 2025-09-16 01:08:14.286371 | orchestrator | changed: [testbed-node-4] 2025-09-16 01:08:14.286379 | orchestrator | changed: [testbed-node-3] 2025-09-16 01:08:14.286387 | orchestrator | 2025-09-16 01:08:14.286395 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-09-16 01:08:14.286408 | orchestrator | Tuesday 16 September 2025 01:06:07 +0000 (0:00:24.897) 0:06:30.304 ***** 2025-09-16 01:08:14.286416 | orchestrator | changed: [testbed-node-5] 2025-09-16 01:08:14.286424 | orchestrator | changed: [testbed-node-4] 2025-09-16 01:08:14.286432 | orchestrator | changed: [testbed-node-3] 2025-09-16 01:08:14.286440 | orchestrator | 2025-09-16 01:08:14.286448 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-09-16 01:08:14.286456 | orchestrator | Tuesday 16 September 2025 01:06:43 +0000 (0:00:35.643) 0:07:05.948 ***** 2025-09-16 01:08:14.286464 | orchestrator | changed: [testbed-node-3] 2025-09-16 01:08:14.286472 | orchestrator | changed: [testbed-node-4] 2025-09-16 01:08:14.286480 | orchestrator | changed: [testbed-node-5] 2025-09-16 01:08:14.286488 | orchestrator | 2025-09-16 01:08:14.286496 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-09-16 01:08:14.286504 | orchestrator | Tuesday 16 September 2025 01:06:44 +0000 (0:00:00.822) 0:07:06.770 ***** 2025-09-16 01:08:14.286511 | orchestrator | changed: [testbed-node-3] 2025-09-16 01:08:14.286519 | orchestrator | changed: [testbed-node-4] 2025-09-16 01:08:14.286527 | orchestrator | changed: [testbed-node-5] 2025-09-16 
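
The restart handlers above bring the nova_libvirt container back up and then wait for it in "Checking libvirt container is ready"; the container definition earlier uses virsh version --daemon as its healthcheck. A minimal poll of that same command via docker exec could look like the sketch below; it assumes a local docker CLI and the container name nova_libvirt from the log, and is not the check the role itself runs.

    import subprocess
    import time

    def libvirt_ready(container="nova_libvirt", attempts=10, delay=3.0):
        # Poll `virsh version --daemon` inside the container until it exits 0,
        # mirroring the healthcheck command shown in the container definition above.
        for _ in range(attempts):
            result = subprocess.run(
                ["docker", "exec", container, "virsh", "version", "--daemon"],
                capture_output=True,
                text=True,
            )
            if result.returncode == 0:
                return True
            time.sleep(delay)
        return False

    if __name__ == "__main__":
        print("nova_libvirt ready:", libvirt_ready())
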
01:08:14.286535 | orchestrator | 2025-09-16 01:08:14.286543 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-09-16 01:08:14.286556 | orchestrator | Tuesday 16 September 2025 01:06:45 +0000 (0:00:00.836) 0:07:07.607 ***** 2025-09-16 01:08:14.286564 | orchestrator | changed: [testbed-node-3] 2025-09-16 01:08:14.286573 | orchestrator | changed: [testbed-node-5] 2025-09-16 01:08:14.286581 | orchestrator | changed: [testbed-node-4] 2025-09-16 01:08:14.286589 | orchestrator | 2025-09-16 01:08:14.286597 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-09-16 01:08:14.286606 | orchestrator | Tuesday 16 September 2025 01:07:03 +0000 (0:00:18.471) 0:07:26.078 ***** 2025-09-16 01:08:14.286614 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:08:14.286622 | orchestrator | 2025-09-16 01:08:14.286630 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-09-16 01:08:14.286638 | orchestrator | Tuesday 16 September 2025 01:07:03 +0000 (0:00:00.125) 0:07:26.204 ***** 2025-09-16 01:08:14.286646 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:08:14.286654 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.286662 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:08:14.286671 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.286679 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.286687 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-09-16 01:08:14.286695 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-16 01:08:14.286704 | orchestrator | 2025-09-16 01:08:14.286712 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-09-16 01:08:14.286720 | orchestrator | Tuesday 16 September 2025 01:07:25 +0000 (0:00:21.939) 0:07:48.143 ***** 2025-09-16 01:08:14.286728 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:08:14.286736 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:08:14.286744 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.286752 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:08:14.286760 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.286768 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.286776 | orchestrator | 2025-09-16 01:08:14.286784 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-09-16 01:08:14.286793 | orchestrator | Tuesday 16 September 2025 01:07:34 +0000 (0:00:08.533) 0:07:56.676 ***** 2025-09-16 01:08:14.286801 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:08:14.286809 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:08:14.286817 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.286825 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.286833 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.286847 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2025-09-16 01:08:14.286855 | orchestrator | 2025-09-16 01:08:14.286863 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-16 01:08:14.286871 | orchestrator | Tuesday 16 September 2025 01:07:37 +0000 (0:00:03.577) 0:08:00.254 ***** 2025-09-16 
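
The "Waiting for nova-compute services to register themselves" task above retried once (20 retries left) before the freshly restarted compute services showed up, and discover_computes.yml then maps them into the cell. The same wait-and-recheck pattern can be reproduced outside Ansible with openstacksdk; the sketch below is illustrative only, assumes credentials in clouds.yaml or OS_* environment variables, and uses the three compute hosts from this run as the expected set.

    import time

    import openstack  # openstacksdk; assumes clouds.yaml / OS_* credentials

    EXPECTED_HOSTS = {"testbed-node-3", "testbed-node-4", "testbed-node-5"}  # compute nodes in this run

    def wait_for_compute_registration(retries=20, delay=10.0):
        # Re-query the compute service list until every expected nova-compute host
        # has registered, similar in spirit to the retrying task above.
        conn = openstack.connect()
        for _ in range(retries):
            registered = {
                service.host
                for service in conn.compute.services()
                if service.binary == "nova-compute"
            }
            if EXPECTED_HOSTS <= registered:
                return True
            time.sleep(delay)
        return False
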
01:08:14.286880 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-16 01:08:14.286888 | orchestrator | 2025-09-16 01:08:14.286896 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-16 01:08:14.286904 | orchestrator | Tuesday 16 September 2025 01:07:50 +0000 (0:00:12.894) 0:08:13.148 ***** 2025-09-16 01:08:14.286912 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-16 01:08:14.286921 | orchestrator | 2025-09-16 01:08:14.286929 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-09-16 01:08:14.286937 | orchestrator | Tuesday 16 September 2025 01:07:51 +0000 (0:00:01.159) 0:08:14.308 ***** 2025-09-16 01:08:14.286945 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:08:14.286953 | orchestrator | 2025-09-16 01:08:14.286962 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-09-16 01:08:14.286970 | orchestrator | Tuesday 16 September 2025 01:07:53 +0000 (0:00:01.219) 0:08:15.527 ***** 2025-09-16 01:08:14.286978 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-16 01:08:14.286986 | orchestrator | 2025-09-16 01:08:14.286994 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-09-16 01:08:14.287002 | orchestrator | Tuesday 16 September 2025 01:08:05 +0000 (0:00:12.093) 0:08:27.621 ***** 2025-09-16 01:08:14.287010 | orchestrator | ok: [testbed-node-3] 2025-09-16 01:08:14.287019 | orchestrator | ok: [testbed-node-4] 2025-09-16 01:08:14.287027 | orchestrator | ok: [testbed-node-5] 2025-09-16 01:08:14.287035 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:08:14.287050 | orchestrator | ok: [testbed-node-1] 2025-09-16 01:08:14.287058 | orchestrator | ok: [testbed-node-2] 2025-09-16 01:08:14.287066 | orchestrator | 2025-09-16 01:08:14.287075 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-09-16 01:08:14.287083 | orchestrator | 2025-09-16 01:08:14.287091 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-09-16 01:08:14.287099 | orchestrator | Tuesday 16 September 2025 01:08:06 +0000 (0:00:01.805) 0:08:29.426 ***** 2025-09-16 01:08:14.287107 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:08:14.287115 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:08:14.287124 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:08:14.287132 | orchestrator | 2025-09-16 01:08:14.287140 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-09-16 01:08:14.287148 | orchestrator | 2025-09-16 01:08:14.287156 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-09-16 01:08:14.287178 | orchestrator | Tuesday 16 September 2025 01:08:08 +0000 (0:00:01.158) 0:08:30.584 ***** 2025-09-16 01:08:14.287187 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.287195 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.287203 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.287212 | orchestrator | 2025-09-16 01:08:14.287220 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-09-16 01:08:14.287228 | orchestrator | 2025-09-16 01:08:14.287236 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] 
********* 2025-09-16 01:08:14.287244 | orchestrator | Tuesday 16 September 2025 01:08:08 +0000 (0:00:00.501) 0:08:31.086 ***** 2025-09-16 01:08:14.287253 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-09-16 01:08:14.287266 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-16 01:08:14.287274 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-16 01:08:14.287283 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-09-16 01:08:14.287291 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-09-16 01:08:14.287307 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-09-16 01:08:14.287315 | orchestrator | skipping: [testbed-node-3] 2025-09-16 01:08:14.287323 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-09-16 01:08:14.287332 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-16 01:08:14.287340 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-16 01:08:14.287348 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-09-16 01:08:14.287356 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-09-16 01:08:14.287365 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-09-16 01:08:14.287373 | orchestrator | skipping: [testbed-node-4] 2025-09-16 01:08:14.287381 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-09-16 01:08:14.287389 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-16 01:08:14.287397 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-16 01:08:14.287405 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-09-16 01:08:14.287414 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-09-16 01:08:14.287421 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-09-16 01:08:14.287430 | orchestrator | skipping: [testbed-node-5] 2025-09-16 01:08:14.287438 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-09-16 01:08:14.287446 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-16 01:08:14.287454 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-16 01:08:14.287463 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-09-16 01:08:14.287471 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-09-16 01:08:14.287479 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-09-16 01:08:14.287487 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.287495 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-09-16 01:08:14.287503 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-16 01:08:14.287512 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-16 01:08:14.287520 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-09-16 01:08:14.287528 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-09-16 01:08:14.287536 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-09-16 01:08:14.287544 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.287552 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  
2025-09-16 01:08:14.287561 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-16 01:08:14.287569 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-16 01:08:14.287577 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-09-16 01:08:14.287585 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-09-16 01:08:14.287593 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-09-16 01:08:14.287601 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.287610 | orchestrator | 2025-09-16 01:08:14.287618 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-09-16 01:08:14.287626 | orchestrator | 2025-09-16 01:08:14.287634 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-09-16 01:08:14.287643 | orchestrator | Tuesday 16 September 2025 01:08:09 +0000 (0:00:01.299) 0:08:32.385 ***** 2025-09-16 01:08:14.287651 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-09-16 01:08:14.287659 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-09-16 01:08:14.287667 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.287676 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-09-16 01:08:14.287695 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-09-16 01:08:14.287703 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.287711 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-09-16 01:08:14.287719 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-09-16 01:08:14.287727 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.287736 | orchestrator | 2025-09-16 01:08:14.287744 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-09-16 01:08:14.287752 | orchestrator | 2025-09-16 01:08:14.287760 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-09-16 01:08:14.287769 | orchestrator | Tuesday 16 September 2025 01:08:10 +0000 (0:00:00.726) 0:08:33.112 ***** 2025-09-16 01:08:14.287777 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.287785 | orchestrator | 2025-09-16 01:08:14.287793 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-09-16 01:08:14.287801 | orchestrator | 2025-09-16 01:08:14.287809 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-09-16 01:08:14.287818 | orchestrator | Tuesday 16 September 2025 01:08:11 +0000 (0:00:00.642) 0:08:33.754 ***** 2025-09-16 01:08:14.287826 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:08:14.287834 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:08:14.287842 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:08:14.287851 | orchestrator | 2025-09-16 01:08:14.287859 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 01:08:14.287867 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-16 01:08:14.287880 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-09-16 01:08:14.287889 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 
ignored=0 2025-09-16 01:08:14.287898 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-16 01:08:14.287906 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-16 01:08:14.287915 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-09-16 01:08:14.287923 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-09-16 01:08:14.287931 | orchestrator | 2025-09-16 01:08:14.287939 | orchestrator | 2025-09-16 01:08:14.287948 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 01:08:14.287956 | orchestrator | Tuesday 16 September 2025 01:08:11 +0000 (0:00:00.414) 0:08:34.169 ***** 2025-09-16 01:08:14.287964 | orchestrator | =============================================================================== 2025-09-16 01:08:14.287972 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 35.64s 2025-09-16 01:08:14.287981 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.12s 2025-09-16 01:08:14.287989 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 24.90s 2025-09-16 01:08:14.287997 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 24.19s 2025-09-16 01:08:14.288005 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.94s 2025-09-16 01:08:14.288013 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.63s 2025-09-16 01:08:14.288021 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 18.47s 2025-09-16 01:08:14.288035 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 17.87s 2025-09-16 01:08:14.288043 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.75s 2025-09-16 01:08:14.288051 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.43s 2025-09-16 01:08:14.288060 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.89s 2025-09-16 01:08:14.288068 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.75s 2025-09-16 01:08:14.288076 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.39s 2025-09-16 01:08:14.288084 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.12s 2025-09-16 01:08:14.288092 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.09s 2025-09-16 01:08:14.288100 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 9.63s 2025-09-16 01:08:14.288108 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.62s 2025-09-16 01:08:14.288116 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.53s 2025-09-16 01:08:14.288125 | orchestrator | nova : Restart nova-api container --------------------------------------- 8.47s 2025-09-16 01:08:14.288133 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.31s 2025-09-16 01:08:14.288141 | orchestrator | 2025-09-16 01:08:14 | INFO 
 | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:08:14.288153 | orchestrator | 2025-09-16 01:08:14 | INFO  | Wait 1 second(s) until the next check [the same STARTED check and one-second wait repeat about every three seconds until 01:10:25]
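
The orchestrator output here is a simple poll loop: fetch the task state, print it, wait one second (plus client overhead), and repeat until a terminal state is reported. A generic sketch of that loop, independent of the OSISM client and assuming a caller-supplied get_state callable:

    import time

    def wait_for_task(get_state, interval=1.0, timeout=3600.0):
        # Poll get_state() until a terminal state is returned, emitting the same
        # kind of heartbeat messages the orchestrator prints in this log.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            state = get_state()
            print(f"Task is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                return state
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
        raise TimeoutError("task did not reach a terminal state in time")
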
01:10:25 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:10:25.139385 | orchestrator | 2025-09-16 01:10:25 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:10:28.185017 | orchestrator | 2025-09-16 01:10:28 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:10:28.185115 | orchestrator | 2025-09-16 01:10:28 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:10:31.223387 | orchestrator | 2025-09-16 01:10:31 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:10:31.223498 | orchestrator | 2025-09-16 01:10:31 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:10:34.260935 | orchestrator | 2025-09-16 01:10:34 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state STARTED 2025-09-16 01:10:34.261035 | orchestrator | 2025-09-16 01:10:34 | INFO  | Wait 1 second(s) until the next check 2025-09-16 01:10:37.301344 | orchestrator | 2025-09-16 01:10:37 | INFO  | Task 49998448-7ee6-4e93-8fe9-32068fcd7e07 is in state SUCCESS 2025-09-16 01:10:37.303201 | orchestrator | 2025-09-16 01:10:37.303240 | orchestrator | 2025-09-16 01:10:37.303251 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-16 01:10:37.303262 | orchestrator | 2025-09-16 01:10:37.303272 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-16 01:10:37.303282 | orchestrator | Tuesday 16 September 2025 01:05:57 +0000 (0:00:00.296) 0:00:00.296 ***** 2025-09-16 01:10:37.303292 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:10:37.303303 | orchestrator | ok: [testbed-node-1] 2025-09-16 01:10:37.303313 | orchestrator | ok: [testbed-node-2] 2025-09-16 01:10:37.303322 | orchestrator | 2025-09-16 01:10:37.303332 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-16 01:10:37.303343 | orchestrator | Tuesday 16 September 2025 01:05:57 +0000 (0:00:00.282) 0:00:00.578 ***** 2025-09-16 01:10:37.303353 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-09-16 01:10:37.303363 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-09-16 01:10:37.303372 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-09-16 01:10:37.303382 | orchestrator | 2025-09-16 01:10:37.303392 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-09-16 01:10:37.303401 | orchestrator | 2025-09-16 01:10:37.303411 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-16 01:10:37.303421 | orchestrator | Tuesday 16 September 2025 01:05:57 +0000 (0:00:00.469) 0:00:01.047 ***** 2025-09-16 01:10:37.303430 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 01:10:37.303441 | orchestrator | 2025-09-16 01:10:37.303450 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-09-16 01:10:37.303460 | orchestrator | Tuesday 16 September 2025 01:05:58 +0000 (0:00:00.532) 0:00:01.580 ***** 2025-09-16 01:10:37.303470 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-09-16 01:10:37.303507 | orchestrator | 2025-09-16 01:10:37.303518 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-09-16 01:10:37.303527 | orchestrator | Tuesday 16 
September 2025 01:06:02 +0000 (0:00:03.843) 0:00:05.423 ***** 2025-09-16 01:10:37.303537 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-09-16 01:10:37.303546 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-09-16 01:10:37.303556 | orchestrator | 2025-09-16 01:10:37.303565 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-09-16 01:10:37.303575 | orchestrator | Tuesday 16 September 2025 01:06:09 +0000 (0:00:07.159) 0:00:12.582 ***** 2025-09-16 01:10:37.303584 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-16 01:10:37.303594 | orchestrator | 2025-09-16 01:10:37.303604 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-09-16 01:10:37.303613 | orchestrator | Tuesday 16 September 2025 01:06:13 +0000 (0:00:03.664) 0:00:16.247 ***** 2025-09-16 01:10:37.303623 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-16 01:10:37.303632 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-16 01:10:37.303656 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-16 01:10:37.303666 | orchestrator | 2025-09-16 01:10:37.303676 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-09-16 01:10:37.303685 | orchestrator | Tuesday 16 September 2025 01:06:21 +0000 (0:00:08.717) 0:00:24.965 ***** 2025-09-16 01:10:37.303696 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-16 01:10:37.303705 | orchestrator | 2025-09-16 01:10:37.303715 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-09-16 01:10:37.303724 | orchestrator | Tuesday 16 September 2025 01:06:25 +0000 (0:00:03.671) 0:00:28.636 ***** 2025-09-16 01:10:37.304147 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-16 01:10:37.304158 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-16 01:10:37.304205 | orchestrator | 2025-09-16 01:10:37.304216 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-09-16 01:10:37.304226 | orchestrator | Tuesday 16 September 2025 01:06:32 +0000 (0:00:07.244) 0:00:35.881 ***** 2025-09-16 01:10:37.304235 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-09-16 01:10:37.304245 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-09-16 01:10:37.304255 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-09-16 01:10:37.304264 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-09-16 01:10:37.304274 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-09-16 01:10:37.304284 | orchestrator | 2025-09-16 01:10:37.304293 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-16 01:10:37.304303 | orchestrator | Tuesday 16 September 2025 01:06:49 +0000 (0:00:17.017) 0:00:52.899 ***** 2025-09-16 01:10:37.304313 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 01:10:37.304323 | orchestrator | 2025-09-16 01:10:37.304333 | orchestrator | TASK [octavia : Create amphora flavor] 
***************************************** 2025-09-16 01:10:37.304342 | orchestrator | Tuesday 16 September 2025 01:06:50 +0000 (0:00:00.626) 0:00:53.525 ***** 2025-09-16 01:10:37.304352 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:10:37.304362 | orchestrator | 2025-09-16 01:10:37.304371 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-09-16 01:10:37.304381 | orchestrator | Tuesday 16 September 2025 01:06:55 +0000 (0:00:04.974) 0:00:58.500 ***** 2025-09-16 01:10:37.304391 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:10:37.304401 | orchestrator | 2025-09-16 01:10:37.304702 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-09-16 01:10:37.304761 | orchestrator | Tuesday 16 September 2025 01:06:59 +0000 (0:00:04.588) 0:01:03.088 ***** 2025-09-16 01:10:37.304773 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:10:37.304783 | orchestrator | 2025-09-16 01:10:37.304792 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-09-16 01:10:37.304802 | orchestrator | Tuesday 16 September 2025 01:07:03 +0000 (0:00:03.063) 0:01:06.152 ***** 2025-09-16 01:10:37.304812 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-09-16 01:10:37.304821 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-09-16 01:10:37.304831 | orchestrator | 2025-09-16 01:10:37.304841 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-09-16 01:10:37.304850 | orchestrator | Tuesday 16 September 2025 01:07:13 +0000 (0:00:10.918) 0:01:17.071 ***** 2025-09-16 01:10:37.304860 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-09-16 01:10:37.304870 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-09-16 01:10:37.304881 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-09-16 01:10:37.304892 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-09-16 01:10:37.304902 | orchestrator | 2025-09-16 01:10:37.304912 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-09-16 01:10:37.304922 | orchestrator | Tuesday 16 September 2025 01:07:32 +0000 (0:00:18.102) 0:01:35.173 ***** 2025-09-16 01:10:37.304931 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:10:37.304941 | orchestrator | 2025-09-16 01:10:37.304950 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-09-16 01:10:37.304959 | orchestrator | Tuesday 16 September 2025 01:07:36 +0000 (0:00:04.940) 0:01:40.113 ***** 2025-09-16 01:10:37.304969 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:10:37.304997 | orchestrator | 2025-09-16 01:10:37.305007 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-09-16 01:10:37.305017 | orchestrator | Tuesday 16 September 2025 01:07:42 +0000 (0:00:05.196) 0:01:45.310 ***** 2025-09-16 01:10:37.305027 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:10:37.305036 | orchestrator | 2025-09-16 
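
The preparation steps above create the amphora flavor and keypair, the two management security groups (lb-mgmt-sec-grp and lb-health-mgr-sec-grp) with ICMP, TCP 22, TCP 9443 and UDP 5555 ingress rules, and the load-balancer management network and subnet. kolla-ansible drives this through its own modules; purely as an illustration, the same security groups and rules could be created with openstacksdk roughly as follows (assumes admin credentials via clouds.yaml; rerunning would raise conflicts for already existing rules).

    import openstack  # openstacksdk; assumes admin credentials via clouds.yaml

    # Security groups and ingress rules as listed in the log output above.
    RULES = {
        "lb-mgmt-sec-grp": [("icmp", None), ("tcp", 22), ("tcp", 9443)],
        "lb-health-mgr-sec-grp": [("udp", 5555)],
    }

    def create_octavia_security_groups():
        conn = openstack.connect()
        for name, rules in RULES.items():
            group = conn.network.find_security_group(name)
            if group is None:
                group = conn.network.create_security_group(name=name)
            for protocol, port in rules:
                conn.network.create_security_group_rule(
                    security_group_id=group.id,
                    direction="ingress",
                    ethertype="IPv4",
                    protocol=protocol,
                    port_range_min=port,
                    port_range_max=port,
                )
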
01:10:37.305046 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-09-16 01:10:37.305056 | orchestrator | Tuesday 16 September 2025 01:07:42 +0000 (0:00:00.202) 0:01:45.512 ***** 2025-09-16 01:10:37.305065 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:10:37.305075 | orchestrator | 2025-09-16 01:10:37.305084 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-16 01:10:37.305094 | orchestrator | Tuesday 16 September 2025 01:07:47 +0000 (0:00:05.330) 0:01:50.843 ***** 2025-09-16 01:10:37.305103 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 01:10:37.305113 | orchestrator | 2025-09-16 01:10:37.305129 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-09-16 01:10:37.305139 | orchestrator | Tuesday 16 September 2025 01:07:48 +0000 (0:00:01.006) 0:01:51.850 ***** 2025-09-16 01:10:37.305149 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:10:37.305159 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:10:37.305191 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:10:37.305201 | orchestrator | 2025-09-16 01:10:37.305211 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-09-16 01:10:37.305220 | orchestrator | Tuesday 16 September 2025 01:07:54 +0000 (0:00:05.523) 0:01:57.373 ***** 2025-09-16 01:10:37.305230 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:10:37.305240 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:10:37.305257 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:10:37.305267 | orchestrator | 2025-09-16 01:10:37.305276 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-09-16 01:10:37.305286 | orchestrator | Tuesday 16 September 2025 01:07:59 +0000 (0:00:05.017) 0:02:02.391 ***** 2025-09-16 01:10:37.305295 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:10:37.305307 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:10:37.305318 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:10:37.305328 | orchestrator | 2025-09-16 01:10:37.305340 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-09-16 01:10:37.305351 | orchestrator | Tuesday 16 September 2025 01:08:00 +0000 (0:00:00.787) 0:02:03.178 ***** 2025-09-16 01:10:37.305362 | orchestrator | ok: [testbed-node-2] 2025-09-16 01:10:37.305373 | orchestrator | ok: [testbed-node-1] 2025-09-16 01:10:37.305384 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:10:37.305395 | orchestrator | 2025-09-16 01:10:37.305406 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-09-16 01:10:37.305416 | orchestrator | Tuesday 16 September 2025 01:08:02 +0000 (0:00:02.223) 0:02:05.402 ***** 2025-09-16 01:10:37.305428 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:10:37.305439 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:10:37.305450 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:10:37.305461 | orchestrator | 2025-09-16 01:10:37.305472 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-09-16 01:10:37.305483 | orchestrator | Tuesday 16 September 2025 01:08:03 +0000 (0:00:01.309) 0:02:06.711 ***** 2025-09-16 01:10:37.305494 | orchestrator | changed: 
[testbed-node-1] 2025-09-16 01:10:37.305504 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:10:37.305515 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:10:37.305526 | orchestrator | 2025-09-16 01:10:37.305537 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-09-16 01:10:37.305549 | orchestrator | Tuesday 16 September 2025 01:08:04 +0000 (0:00:01.193) 0:02:07.904 ***** 2025-09-16 01:10:37.305559 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:10:37.305570 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:10:37.305581 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:10:37.305592 | orchestrator | 2025-09-16 01:10:37.305634 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-09-16 01:10:37.305647 | orchestrator | Tuesday 16 September 2025 01:08:06 +0000 (0:00:02.144) 0:02:10.048 ***** 2025-09-16 01:10:37.305659 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:10:37.305669 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:10:37.305678 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:10:37.305688 | orchestrator | 2025-09-16 01:10:37.305697 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-09-16 01:10:37.305707 | orchestrator | Tuesday 16 September 2025 01:08:08 +0000 (0:00:01.578) 0:02:11.627 ***** 2025-09-16 01:10:37.305716 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:10:37.305726 | orchestrator | ok: [testbed-node-1] 2025-09-16 01:10:37.305735 | orchestrator | ok: [testbed-node-2] 2025-09-16 01:10:37.305745 | orchestrator | 2025-09-16 01:10:37.305754 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-09-16 01:10:37.305764 | orchestrator | Tuesday 16 September 2025 01:08:09 +0000 (0:00:00.806) 0:02:12.433 ***** 2025-09-16 01:10:37.305773 | orchestrator | ok: [testbed-node-1] 2025-09-16 01:10:37.305783 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:10:37.305792 | orchestrator | ok: [testbed-node-2] 2025-09-16 01:10:37.305802 | orchestrator | 2025-09-16 01:10:37.305811 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-16 01:10:37.305821 | orchestrator | Tuesday 16 September 2025 01:08:12 +0000 (0:00:02.695) 0:02:15.129 ***** 2025-09-16 01:10:37.305830 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 01:10:37.305840 | orchestrator | 2025-09-16 01:10:37.305849 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-09-16 01:10:37.305866 | orchestrator | Tuesday 16 September 2025 01:08:12 +0000 (0:00:00.518) 0:02:15.647 ***** 2025-09-16 01:10:37.305876 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:10:37.305885 | orchestrator | 2025-09-16 01:10:37.305895 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-09-16 01:10:37.305904 | orchestrator | Tuesday 16 September 2025 01:08:16 +0000 (0:00:04.263) 0:02:19.910 ***** 2025-09-16 01:10:37.305913 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:10:37.305923 | orchestrator | 2025-09-16 01:10:37.305932 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-09-16 01:10:37.305942 | orchestrator | Tuesday 16 September 2025 01:08:19 +0000 (0:00:03.184) 0:02:23.095 ***** 2025-09-16 
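
Earlier in this play the role creates a health-manager port per controller, plugs it into Open vSwitch br-int, sets up dhclient and the octavia-interface service, and then blocks in "Wait for interface ohm0 ip appear" until the ohm0 interface has an address. The actual task works from Ansible facts; the sketch below is only a rough stand-in that polls ip -4 addr show for an IPv4 address on a given interface name.

    import re
    import subprocess
    import time

    def wait_for_interface_ip(interface="ohm0", retries=30, delay=2.0):
        # Poll `ip -4 addr show <interface>` until an IPv4 address is present.
        pattern = re.compile(r"inet (\d+\.\d+\.\d+\.\d+)")
        for _ in range(retries):
            output = subprocess.run(
                ["ip", "-4", "addr", "show", interface],
                capture_output=True,
                text=True,
            ).stdout
            match = pattern.search(output)
            if match:
                return match.group(1)
            time.sleep(delay)
        raise TimeoutError(f"no IPv4 address appeared on {interface}")
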
01:10:37.305951 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-09-16 01:10:37.305961 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-09-16 01:10:37.305971 | orchestrator | 2025-09-16 01:10:37.305980 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-09-16 01:10:37.305990 | orchestrator | Tuesday 16 September 2025 01:08:27 +0000 (0:00:07.089) 0:02:30.185 ***** 2025-09-16 01:10:37.305999 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:10:37.306009 | orchestrator | 2025-09-16 01:10:37.306061 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-09-16 01:10:37.306071 | orchestrator | Tuesday 16 September 2025 01:08:30 +0000 (0:00:03.510) 0:02:33.695 ***** 2025-09-16 01:10:37.306081 | orchestrator | ok: [testbed-node-0] 2025-09-16 01:10:37.306090 | orchestrator | ok: [testbed-node-1] 2025-09-16 01:10:37.306106 | orchestrator | ok: [testbed-node-2] 2025-09-16 01:10:37.306116 | orchestrator | 2025-09-16 01:10:37.306125 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-09-16 01:10:37.306135 | orchestrator | Tuesday 16 September 2025 01:08:30 +0000 (0:00:00.324) 0:02:34.019 ***** 2025-09-16 01:10:37.306148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-16 01:10:37.306248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-16 01:10:37.306262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-16 01:10:37.306281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-16 01:10:37.306294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-16 01:10:37.306309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-16 01:10:37.306320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.306332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': 
'30'}}}) 2025-09-16 01:10:37.306368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.306386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.306396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.306407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.306422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:10:37.306433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:10:37.306443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:10:37.306453 | orchestrator | 2025-09-16 01:10:37.306463 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-09-16 01:10:37.306474 | orchestrator | Tuesday 16 September 2025 01:08:33 +0000 (0:00:02.543) 0:02:36.563 ***** 2025-09-16 01:10:37.306490 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:10:37.306500 | orchestrator | 2025-09-16 01:10:37.306530 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-09-16 01:10:37.306539 | orchestrator | Tuesday 16 September 2025 01:08:33 +0000 (0:00:00.137) 0:02:36.700 ***** 2025-09-16 01:10:37.306547 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:10:37.306555 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:10:37.306563 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:10:37.306571 | orchestrator | 2025-09-16 01:10:37.306579 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-09-16 01:10:37.306587 | orchestrator | Tuesday 16 September 2025 01:08:34 +0000 (0:00:00.470) 0:02:37.170 ***** 2025-09-16 01:10:37.306595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-16 01:10:37.306604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-16 
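Aside: each loop item in the directory, policy and certificate tasks in this play carries the full definition of one Octavia service. Reassembled from the log output (values shown for testbed-node-0; only indentation is added, and presenting it as a standalone mapping rather than the role's internal variable is an assumption), the octavia-api entry has this shape:

octavia-api:
  container_name: octavia_api
  group: octavia-api
  enabled: true
  image: registry.osism.tech/kolla/octavia-api:2024.2
  volumes:
    - "/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro"
    - "/etc/localtime:/etc/localtime:ro"
    - "/etc/timezone:/etc/timezone:ro"
    - "kolla_logs:/var/log/kolla/"
    - ""                                   # two empty entries appear in the log output as-is
    - ""
    - "octavia_driver_agent:/var/run/octavia/"
  dimensions: {}
  healthcheck:
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9876"]
    timeout: "30"
  haproxy:
    octavia_api:                           # internal listener
      enabled: "yes"
      mode: http
      external: false
      port: "9876"
      listen_port: "9876"
      tls_backend: "no"
    octavia_api_external:                  # external listener behind api.testbed.osism.xyz
      enabled: "yes"
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "9876"
      listen_port: "9876"
      tls_backend: "no"
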
01:10:37.306617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-16 01:10:37.306625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-16 01:10:37.306634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:10:37.306647 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:10:37.306679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-16 01:10:37.306688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-16 01:10:37.306697 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-16 01:10:37.306714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-16 01:10:37.306722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:10:37.306731 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:10:37.306739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-16 01:10:37.306774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-16 01:10:37.306784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-16 01:10:37.306792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-16 01:10:37.306800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:10:37.306808 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:10:37.306816 | orchestrator | 2025-09-16 01:10:37.306828 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-16 01:10:37.306837 | orchestrator | Tuesday 16 September 2025 01:08:34 +0000 (0:00:00.719) 0:02:37.889 ***** 2025-09-16 01:10:37.306845 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-16 01:10:37.306853 | orchestrator | 2025-09-16 01:10:37.306861 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-09-16 01:10:37.306869 | orchestrator | Tuesday 16 September 2025 01:08:35 +0000 (0:00:00.541) 0:02:38.431 ***** 2025-09-16 01:10:37.306877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 
2025-09-16 01:10:37.306912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-16 01:10:37.306922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-16 01:10:37.306931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-16 01:10:37.306943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-16 01:10:37.306951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-16 01:10:37.306960 | 
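Aside: the extra-CA task running here distributes additional CA certificates into each enabled service's /etc/kolla/<service>/ configuration directory before the containers are (re)started. A minimal sketch in the same spirit (the source path, variable names and file mode are assumptions, not the literal kolla-ansible task):

- name: octavia | Copying over extra CA certificates            # sketch only
  ansible.builtin.copy:
    src: "{{ kolla_certificates_dir }}/ca/"                     # assumed location of the extra CA bundle
    dest: "/etc/kolla/{{ item.key }}/ca-certificates/"
    mode: "0644"
  loop: "{{ octavia_services | dict2items }}"                   # iterates the per-service dicts shown in the log
  when: item.value.enabled | bool
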
orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.306971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.306984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.306993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.307001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.307013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.307021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:10:37.307034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:10:37.307053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:10:37.307062 | orchestrator | 2025-09-16 01:10:37.307070 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-09-16 01:10:37.307078 | orchestrator | Tuesday 16 September 2025 01:08:40 +0000 (0:00:05.105) 0:02:43.537 ***** 2025-09-16 01:10:37.307087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-16 01:10:37.307095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-16 01:10:37.307107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-16 01:10:37.307115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-16 01:10:37.307128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:10:37.307136 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:10:37.307148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-16 01:10:37.307157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-16 01:10:37.307165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-16 01:10:37.307187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-16 01:10:37.307200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:10:37.307213 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:10:37.307221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-16 01:10:37.307234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-16 01:10:37.307242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-16 01:10:37.307250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-16 01:10:37.307259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:10:37.307267 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:10:37.307275 | orchestrator | 2025-09-16 01:10:37.307287 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-09-16 01:10:37.307295 | orchestrator | Tuesday 16 September 2025 01:08:41 +0000 (0:00:00.713) 0:02:44.251 ***** 2025-09-16 01:10:37.307308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}}}})  2025-09-16 01:10:37.307316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-16 01:10:37.307325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-16 01:10:37.307337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-16 01:10:37.307346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:10:37.307354 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:10:37.307362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-16 01:10:37.307382 | 
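Aside: the two backend-TLS copy tasks report skipping on every node, which is consistent with the haproxy entries above carrying tls_backend: 'no', i.e. backend TLS is disabled in this testbed. A guard of roughly this form would produce exactly that result (a sketch, not the literal kolla-ansible task; the certificate path and the kolla_enable_tls_backend flag as the deciding variable are assumptions):

- name: octavia | Copying over backend internal TLS certificate   # sketch only
  ansible.builtin.copy:
    src: "{{ kolla_certificates_dir }}/octavia-cert.pem"           # assumed certificate path
    dest: "/etc/kolla/{{ item.key }}/octavia-cert.pem"
    mode: "0600"
  loop: "{{ octavia_services | dict2items }}"
  when:
    - item.value.enabled | bool
    - kolla_enable_tls_backend | bool                              # false in this deployment, so every item is skipped
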
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-16 01:10:37.307391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-16 01:10:37.307399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-16 01:10:37.307413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:10:37.307421 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:10:37.307430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-16 01:10:37.307438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-16 01:10:37.307455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-16 01:10:37.307463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-16 01:10:37.307471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-16 01:10:37.307479 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:10:37.307487 | orchestrator | 2025-09-16 01:10:37.307495 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-09-16 01:10:37.307503 | orchestrator | Tuesday 16 September 2025 01:08:41 +0000 (0:00:00.818) 0:02:45.069 ***** 2025-09-16 01:10:37.307517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-16 01:10:37.307526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-16 01:10:37.307544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-16 01:10:37.307552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-16 01:10:37.307560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-16 01:10:37.307569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-16 01:10:37.307581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.307590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.307603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.307615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.307623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.307631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.307644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:10:37.307653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:10:37.307661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:10:37.307676 | orchestrator | 2025-09-16 01:10:37.307684 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-09-16 01:10:37.307692 | orchestrator | Tuesday 16 September 2025 01:08:46 +0000 (0:00:04.866) 0:02:49.935 ***** 2025-09-16 01:10:37.307700 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-16 01:10:37.307708 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-16 01:10:37.307716 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-16 01:10:37.307724 | orchestrator | 2025-09-16 01:10:37.307732 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-09-16 01:10:37.307739 | orchestrator | Tuesday 16 September 2025 01:08:48 +0000 (0:00:02.092) 0:02:52.028 ***** 2025-09-16 01:10:37.307751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-16 01:10:37.307760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-16 01:10:37.307774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-16 01:10:37.307787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-16 01:10:37.307795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-16 01:10:37.307804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-16 01:10:37.307816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.307824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.307832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.307844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.307857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.307866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.307877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:10:37.307886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:10:37.307894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:10:37.307902 | orchestrator | 2025-09-16 01:10:37.307910 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-09-16 01:10:37.307918 | orchestrator | Tuesday 16 September 2025 01:09:04 +0000 (0:00:15.887) 0:03:07.916 ***** 2025-09-16 01:10:37.307926 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:10:37.307934 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:10:37.307942 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:10:37.307950 | orchestrator | 2025-09-16 01:10:37.307958 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] 
****************** 2025-09-16 01:10:37.307965 | orchestrator | Tuesday 16 September 2025 01:09:06 +0000 (0:00:01.406) 0:03:09.322 ***** 2025-09-16 01:10:37.307981 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-16 01:10:37.307988 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-16 01:10:37.308000 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-16 01:10:37.308008 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-16 01:10:37.308016 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-16 01:10:37.308024 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-16 01:10:37.308032 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-16 01:10:37.308040 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-16 01:10:37.308047 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-16 01:10:37.308055 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-16 01:10:37.308063 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-16 01:10:37.308070 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-16 01:10:37.308078 | orchestrator | 2025-09-16 01:10:37.308086 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-09-16 01:10:37.308094 | orchestrator | Tuesday 16 September 2025 01:09:11 +0000 (0:00:05.241) 0:03:14.563 ***** 2025-09-16 01:10:37.308101 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-16 01:10:37.308109 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-16 01:10:37.308117 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-16 01:10:37.308125 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-16 01:10:37.308133 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-16 01:10:37.308140 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-16 01:10:37.308148 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-16 01:10:37.308156 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-16 01:10:37.308163 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-16 01:10:37.308216 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-16 01:10:37.308224 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-16 01:10:37.308232 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-16 01:10:37.308240 | orchestrator | 2025-09-16 01:10:37.308247 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-09-16 01:10:37.308255 | orchestrator | Tuesday 16 September 2025 01:09:16 +0000 (0:00:05.359) 0:03:19.923 ***** 2025-09-16 01:10:37.308263 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-16 01:10:37.308271 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-16 01:10:37.308279 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-16 01:10:37.308286 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-16 01:10:37.308293 | 
orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-16 01:10:37.308304 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-16 01:10:37.308311 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-16 01:10:37.308318 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-16 01:10:37.308324 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-16 01:10:37.308330 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-16 01:10:37.308337 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-16 01:10:37.308344 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-16 01:10:37.308350 | orchestrator | 2025-09-16 01:10:37.308357 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-09-16 01:10:37.308371 | orchestrator | Tuesday 16 September 2025 01:09:22 +0000 (0:00:05.220) 0:03:25.144 ***** 2025-09-16 01:10:37.308378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-16 01:10:37.308390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-16 01:10:37.308398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-16 01:10:37.308405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-16 01:10:37.308415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-16 01:10:37.308422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-16 01:10:37.308434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.308444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.308452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.308459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.308465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.308476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-16 01:10:37.308489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:10:37.308496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:10:37.308507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-16 01:10:37.308514 | orchestrator | 2025-09-16 01:10:37.308521 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-16 01:10:37.308528 | orchestrator | Tuesday 16 September 2025 01:09:25 +0000 (0:00:03.805) 0:03:28.949 ***** 2025-09-16 01:10:37.308534 | orchestrator | skipping: [testbed-node-0] 2025-09-16 01:10:37.308541 | orchestrator | skipping: [testbed-node-1] 2025-09-16 01:10:37.308548 | orchestrator | skipping: [testbed-node-2] 2025-09-16 01:10:37.308554 | orchestrator | 2025-09-16 01:10:37.308561 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-09-16 01:10:37.308568 | orchestrator | Tuesday 16 September 2025 01:09:26 +0000 (0:00:00.303) 0:03:29.252 ***** 2025-09-16 01:10:37.308574 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:10:37.308581 | orchestrator | 2025-09-16 01:10:37.308587 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-09-16 01:10:37.308594 | orchestrator | Tuesday 16 September 2025 01:09:28 +0000 (0:00:02.189) 0:03:31.441 ***** 2025-09-16 01:10:37.308600 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:10:37.308607 | orchestrator | 2025-09-16 01:10:37.308614 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-09-16 01:10:37.308620 | orchestrator | Tuesday 16 September 2025 01:09:30 +0000 (0:00:02.236) 0:03:33.678 ***** 2025-09-16 01:10:37.308627 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:10:37.308633 | orchestrator | 2025-09-16 01:10:37.308640 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-09-16 01:10:37.308647 | orchestrator | Tuesday 16 September 2025 01:09:32 +0000 (0:00:02.203) 0:03:35.882 ***** 2025-09-16 01:10:37.308653 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:10:37.308660 | orchestrator | 2025-09-16 01:10:37.308666 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-09-16 01:10:37.308673 | orchestrator | Tuesday 16 September 2025 01:09:35 +0000 (0:00:02.249) 0:03:38.131 ***** 2025-09-16 01:10:37.308684 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:10:37.308691 | orchestrator | 2025-09-16 01:10:37.308697 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-16 01:10:37.308704 | orchestrator | Tuesday 16 September 2025 01:09:54 +0000 (0:00:19.730) 0:03:57.861 ***** 2025-09-16 01:10:37.308710 | orchestrator | 2025-09-16 01:10:37.308717 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-16 01:10:37.308724 | orchestrator | Tuesday 16 September 2025 01:09:54 +0000 (0:00:00.065) 0:03:57.927 ***** 2025-09-16 01:10:37.308730 | orchestrator | 
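Each item printed by the octavia tasks above is one entry of a service map: a service name mapped to a container definition (image, volumes, optional healthcheck and haproxy settings) that kolla-ansible walks for the config.json, octavia.conf and container-check steps. Below is a minimal Python sketch of that shape, built only from values copied out of the log; the simplified structure is illustrative and is not the role's actual variable.

# Minimal sketch of the service map shape visible in the loop items above.
# Field values are copied from the log output; everything else is
# illustrative only, not kolla-ansible's real data structure.
octavia_services = {
    "octavia-api": {
        "container_name": "octavia_api",
        "enabled": True,
        "image": "registry.osism.tech/kolla/octavia-api:2024.2",
        "volumes": [
            "/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro",
            "kolla_logs:/var/log/kolla/",
            "octavia_driver_agent:/var/run/octavia/",
        ],
    },
    "octavia-worker": {
        "container_name": "octavia_worker",
        "enabled": True,
        "image": "registry.osism.tech/kolla/octavia-worker:2024.2",
        "volumes": ["/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro"],
    },
}

# Rough equivalent of iterating the map as dict2items-style loop items:
# each item carries a 'key' (service name) and a 'value' (definition),
# matching the (item={'key': ..., 'value': {...}}) lines in the log.
for item in ({"key": k, "value": v} for k, v in octavia_services.items()):
    if item["value"]["enabled"]:
        print(item["key"], "->", item["value"]["container_name"])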
2025-09-16 01:10:37.308737 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-16 01:10:37.308743 | orchestrator | Tuesday 16 September 2025 01:09:54 +0000 (0:00:00.066) 0:03:57.993 ***** 2025-09-16 01:10:37.308750 | orchestrator | 2025-09-16 01:10:37.308756 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-09-16 01:10:37.308763 | orchestrator | Tuesday 16 September 2025 01:09:54 +0000 (0:00:00.062) 0:03:58.056 ***** 2025-09-16 01:10:37.308769 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:10:37.308780 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:10:37.308786 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:10:37.308793 | orchestrator | 2025-09-16 01:10:37.308800 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-09-16 01:10:37.308806 | orchestrator | Tuesday 16 September 2025 01:10:05 +0000 (0:00:10.945) 0:04:09.001 ***** 2025-09-16 01:10:37.308813 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:10:37.308820 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:10:37.308826 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:10:37.308833 | orchestrator | 2025-09-16 01:10:37.308839 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-09-16 01:10:37.308846 | orchestrator | Tuesday 16 September 2025 01:10:12 +0000 (0:00:06.460) 0:04:15.461 ***** 2025-09-16 01:10:37.308853 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:10:37.308859 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:10:37.308866 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:10:37.308872 | orchestrator | 2025-09-16 01:10:37.308879 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-09-16 01:10:37.308886 | orchestrator | Tuesday 16 September 2025 01:10:22 +0000 (0:00:10.319) 0:04:25.781 ***** 2025-09-16 01:10:37.308892 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:10:37.308899 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:10:37.308905 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:10:37.308912 | orchestrator | 2025-09-16 01:10:37.308918 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-09-16 01:10:37.308925 | orchestrator | Tuesday 16 September 2025 01:10:31 +0000 (0:00:08.394) 0:04:34.176 ***** 2025-09-16 01:10:37.308932 | orchestrator | changed: [testbed-node-0] 2025-09-16 01:10:37.308938 | orchestrator | changed: [testbed-node-1] 2025-09-16 01:10:37.308945 | orchestrator | changed: [testbed-node-2] 2025-09-16 01:10:37.308951 | orchestrator | 2025-09-16 01:10:37.308958 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-16 01:10:37.308965 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-16 01:10:37.308972 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-16 01:10:37.308979 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-16 01:10:37.308985 | orchestrator | 2025-09-16 01:10:37.308992 | orchestrator | 2025-09-16 01:10:37.308999 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-16 01:10:37.309005 | orchestrator | 
Tuesday 16 September 2025 01:10:36 +0000 (0:00:05.399) 0:04:39.576 ***** 2025-09-16 01:10:37.309015 | orchestrator | =============================================================================== 2025-09-16 01:10:37.309026 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 19.73s 2025-09-16 01:10:37.309033 | orchestrator | octavia : Add rules for security groups -------------------------------- 18.10s 2025-09-16 01:10:37.309040 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.02s 2025-09-16 01:10:37.309046 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.89s 2025-09-16 01:10:37.309053 | orchestrator | octavia : Restart octavia-api container -------------------------------- 10.95s 2025-09-16 01:10:37.309059 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.92s 2025-09-16 01:10:37.309066 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.32s 2025-09-16 01:10:37.309072 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.72s 2025-09-16 01:10:37.309079 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.39s 2025-09-16 01:10:37.309086 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.24s 2025-09-16 01:10:37.309092 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.16s 2025-09-16 01:10:37.309099 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.09s 2025-09-16 01:10:37.309105 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.46s 2025-09-16 01:10:37.309112 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.52s 2025-09-16 01:10:37.309118 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.40s 2025-09-16 01:10:37.309125 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.36s 2025-09-16 01:10:37.309131 | orchestrator | octavia : Update loadbalancer management subnet ------------------------- 5.33s 2025-09-16 01:10:37.309138 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.24s 2025-09-16 01:10:37.309144 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.22s 2025-09-16 01:10:37.309151 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.20s 2025-09-16 01:10:37.309158 | orchestrator | 2025-09-16 01:10:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-16 01:10:40.344658 | orchestrator | 2025-09-16 01:10:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-16 01:10:43.388135 | orchestrator | 2025-09-16 01:10:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-16 01:10:46.440041 | orchestrator | 2025-09-16 01:10:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-16 01:10:49.480585 | orchestrator | 2025-09-16 01:10:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-16 01:10:52.521485 | orchestrator | 2025-09-16 01:10:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-16 01:10:55.563809 | orchestrator | 2025-09-16 01:10:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-16 
01:10:58.602560 | orchestrator | 2025-09-16 01:10:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-16 01:11:01.643521 | orchestrator | 2025-09-16 01:11:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-16 01:11:04.685907 | orchestrator | 2025-09-16 01:11:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-16 01:11:07.726832 | orchestrator | 2025-09-16 01:11:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-16 01:11:10.769788 | orchestrator | 2025-09-16 01:11:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-16 01:11:13.812392 | orchestrator | 2025-09-16 01:11:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-16 01:11:16.855493 | orchestrator | 2025-09-16 01:11:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-16 01:11:19.894276 | orchestrator | 2025-09-16 01:11:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-16 01:11:22.932983 | orchestrator | 2025-09-16 01:11:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-16 01:11:25.975333 | orchestrator | 2025-09-16 01:11:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-16 01:11:29.021134 | orchestrator | 2025-09-16 01:11:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-16 01:11:32.065461 | orchestrator | 2025-09-16 01:11:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-16 01:11:35.111847 | orchestrator | 2025-09-16 01:11:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-16 01:11:38.152771 | orchestrator | 2025-09-16 01:11:38.440334 | orchestrator | 2025-09-16 01:11:38.446165 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Tue Sep 16 01:11:38 UTC 2025 2025-09-16 01:11:38.446231 | orchestrator | 2025-09-16 01:11:38.749521 | orchestrator | ok: Runtime: 0:32:09.903373 2025-09-16 01:11:38.997671 | 2025-09-16 01:11:38.997862 | TASK [Bootstrap services] 2025-09-16 01:11:39.733423 | orchestrator | 2025-09-16 01:11:39.733601 | orchestrator | # BOOTSTRAP 2025-09-16 01:11:39.733633 | orchestrator | 2025-09-16 01:11:39.733648 | orchestrator | + set -e 2025-09-16 01:11:39.733662 | orchestrator | + echo 2025-09-16 01:11:39.733675 | orchestrator | + echo '# BOOTSTRAP' 2025-09-16 01:11:39.733693 | orchestrator | + echo 2025-09-16 01:11:39.733737 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-09-16 01:11:39.741657 | orchestrator | + set -e 2025-09-16 01:11:39.741681 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-09-16 01:11:43.961117 | orchestrator | 2025-09-16 01:11:43 | INFO  | It takes a moment until task 73421059-6962-4986-8abe-84e6e9c7b6ae (flavor-manager) has been started and output is visible here. 
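What follows is a rich-formatted Python traceback from openstack-flavor-manager. Its locals dump shows that the definitions dictionary loaded for the 'local' flavor set contains only 'reference' and 'mandatory' keys, while FlavorManager.__init__ is entered with recommended=True and indexes definitions["recommended"] on the highlighted line 101, so that lookup appears to be where the run fails. Below is a minimal Python reduction of that state, with data cut down from the log; it is an illustration of the failure shape, not the project's code or its fix.

# Hypothetical reduction of the state shown in the traceback's locals:
# the loaded definitions have no "recommended" key, so indexing it raises
# KeyError. Data here is trimmed from the log; names are illustrative.
definitions = {
    "reference": [{"field": "name", "mandatory_prefix": "SCS-"}],
    "mandatory": [
        {"name": "SCS-1L-1", "cpus": 1, "ram": 1024, "disk": 0},
        {"name": "SCS-1V-2", "cpus": 1, "ram": 2048, "disk": 0},
    ],
}
recommended = True

recommended_flavors = []
if recommended:
    try:
        recommended_flavors = definitions["recommended"]  # the lookup on main.py line 101
    except KeyError:
        # Defensive fallback for this sketch only: an empty default keeps the
        # run going when a flavor set ships no "recommended" section.
        recommended_flavors = definitions.get("recommended", [])

print(len(recommended_flavors), "recommended flavors")

Whether the right behaviour is a defensive default or shipping a "recommended" section with the flavor definitions is outside what this log shows.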
2025-09-16 01:11:47.573050 | orchestrator | ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ 2025-09-16 01:11:47.573141 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:194 │ 2025-09-16 01:11:47.573166 | orchestrator | │ in run │ 2025-09-16 01:11:47.573201 | orchestrator | │ │ 2025-09-16 01:11:47.573213 | orchestrator | │ 191 │ logger.add(sys.stderr, format=log_fmt, level=level, colorize=True) │ 2025-09-16 01:11:47.573236 | orchestrator | │ 192 │ │ 2025-09-16 01:11:47.573248 | orchestrator | │ 193 │ definitions = get_flavor_definitions(name, url) │ 2025-09-16 01:11:47.573260 | orchestrator | │ ❱ 194 │ manager = FlavorManager( │ 2025-09-16 01:11:47.573271 | orchestrator | │ 195 │ │ cloud=Cloud(cloud), │ 2025-09-16 01:11:47.573282 | orchestrator | │ 196 │ │ definitions=definitions, │ 2025-09-16 01:11:47.573293 | orchestrator | │ 197 │ │ recommended=recommended, │ 2025-09-16 01:11:47.573304 | orchestrator | │ │ 2025-09-16 01:11:47.573316 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-09-16 01:11:47.573338 | orchestrator | │ │ cloud = 'admin' │ │ 2025-09-16 01:11:47.573349 | orchestrator | │ │ debug = False │ │ 2025-09-16 01:11:47.573360 | orchestrator | │ │ definitions = { │ │ 2025-09-16 01:11:47.573372 | orchestrator | │ │ │ 'reference': [ │ │ 2025-09-16 01:11:47.573382 | orchestrator | │ │ │ │ {'field': 'name', 'mandatory_prefix': 'SCS-'}, │ │ 2025-09-16 01:11:47.573394 | orchestrator | │ │ │ │ {'field': 'cpus'}, │ │ 2025-09-16 01:11:47.573405 | orchestrator | │ │ │ │ {'field': 'ram'}, │ │ 2025-09-16 01:11:47.573416 | orchestrator | │ │ │ │ {'field': 'disk'}, │ │ 2025-09-16 01:11:47.573427 | orchestrator | │ │ │ │ {'field': 'public', 'default': True}, │ │ 2025-09-16 01:11:47.573438 | orchestrator | │ │ │ │ {'field': 'disabled', 'default': False} │ │ 2025-09-16 01:11:47.573449 | orchestrator | │ │ │ ], │ │ 2025-09-16 01:11:47.573460 | orchestrator | │ │ │ 'mandatory': [ │ │ 2025-09-16 01:11:47.573471 | orchestrator | │ │ │ │ { │ │ 2025-09-16 01:11:47.573482 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1', │ │ 2025-09-16 01:11:47.573517 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-16 01:11:47.573529 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-16 01:11:47.573540 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-16 01:11:47.573551 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-16 01:11:47.573562 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-16 01:11:47.573572 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:1', │ │ 2025-09-16 01:11:47.573583 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-1', │ │ 2025-09-16 01:11:47.573594 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-16 01:11:47.573605 | orchestrator | │ │ │ │ }, │ │ 2025-09-16 01:11:47.573616 | orchestrator | │ │ │ │ { │ │ 2025-09-16 01:11:47.573627 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1-5', │ │ 2025-09-16 01:11:47.573638 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-16 01:11:47.573649 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-16 01:11:47.573659 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-09-16 01:11:47.573670 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-16 01:11:47.573699 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-16 01:11:47.573710 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:5', │ │ 2025-09-16 01:11:47.573721 | orchestrator | │ │ │ │ │ 'scs:name-v2': 
'SCS-1L-5', │ │ 2025-09-16 01:11:47.573732 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-16 01:11:47.573743 | orchestrator | │ │ │ │ }, │ │ 2025-09-16 01:11:47.573754 | orchestrator | │ │ │ │ { │ │ 2025-09-16 01:11:47.573765 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2', │ │ 2025-09-16 01:11:47.573780 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-16 01:11:47.573792 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-09-16 01:11:47.573802 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-16 01:11:47.573813 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-16 01:11:47.573825 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-16 01:11:47.573836 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2', │ │ 2025-09-16 01:11:47.573847 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2', │ │ 2025-09-16 01:11:47.573858 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-16 01:11:47.573869 | orchestrator | │ │ │ │ }, │ │ 2025-09-16 01:11:47.573880 | orchestrator | │ │ │ │ { │ │ 2025-09-16 01:11:47.573890 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2-5', │ │ 2025-09-16 01:11:47.573901 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-16 01:11:47.573920 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-09-16 01:11:47.573931 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-09-16 01:11:47.573941 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-16 01:11:47.573953 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-16 01:11:47.573963 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2:5', │ │ 2025-09-16 01:11:47.573974 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2-5', │ │ 2025-09-16 01:11:47.573985 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-16 01:11:47.573995 | orchestrator | │ │ │ │ }, │ │ 2025-09-16 01:11:47.574006 | orchestrator | │ │ │ │ { │ │ 2025-09-16 01:11:47.574048 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4', │ │ 2025-09-16 01:11:47.574061 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-16 01:11:47.574072 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-16 01:11:47.574083 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-16 01:11:47.574093 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-16 01:11:47.574104 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-16 01:11:47.574115 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4', │ │ 2025-09-16 01:11:47.574126 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4', │ │ 2025-09-16 01:11:47.574136 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-16 01:11:47.574147 | orchestrator | │ │ │ │ }, │ │ 2025-09-16 01:11:47.574158 | orchestrator | │ │ │ │ { │ │ 2025-09-16 01:11:47.574183 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4-10', │ │ 2025-09-16 01:11:47.574195 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-16 01:11:47.574211 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-16 01:11:47.574222 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-09-16 01:11:47.574241 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-16 01:11:47.599707 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-16 01:11:47.599734 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4:10', │ │ 2025-09-16 01:11:47.599745 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4-10', │ │ 2025-09-16 01:11:47.599756 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-16 01:11:47.599767 | orchestrator | │ │ │ │ }, │ │ 2025-09-16 01:11:47.599778 | orchestrator | │ │ 
2025-09-16 01:11:47.599788 | orchestrator |   (calling-frame locals, mandatory flavor definitions continued; every entry carries 'scs:disk0-type': 'network' and 'hw_rng:allowed': 'true')
2025-09-16 01:11:47.599788 | orchestrator |     SCS-1V-8      cpus=1  ram=8192  disk=0   scs:cpu-type=shared-core  scs:name-v1=SCS-1V:8     scs:name-v2=SCS-1V-8
2025-09-16 01:11:47.599917 | orchestrator |     SCS-1V-8-20   cpus=1  ram=8192  disk=20  scs:cpu-type=shared-core  scs:name-v1=SCS-1V:8:20  scs:name-v2=SCS-1V-8-20
2025-09-16 01:11:47.600034 | orchestrator |     SCS-2V-4      cpus=2  ram=4096  disk=0   scs:cpu-type=shared-core  scs:name-v1=SCS-2V:4     scs:name-v2=SCS-2V-4
2025-09-16 01:11:47.600160 | orchestrator |     SCS-2V-4-10   cpus=2  ram=4096  disk=10  scs:cpu-type=shared-core  scs:name-v1=SCS-2V:4:10  scs:name-v2=SCS-2V-4-10
2025-09-16 01:11:47.600301 | orchestrator |     ... +19
2025-09-16 01:11:47.600311 | orchestrator |     ]
2025-09-16 01:11:47.600322 | orchestrator |   }
2025-09-16 01:11:47.600333 | orchestrator |   level = 'INFO'
2025-09-16 01:11:47.600344 | orchestrator |   limit_memory = 32
2025-09-16 01:11:47.600355 | orchestrator |   log_fmt = '{time:YYYY-MM-DD HH:mm:ss} | {level: <8} | '+17
2025-09-16 01:11:47.600376 | orchestrator |   name = 'local'
2025-09-16 01:11:47.600387 | orchestrator |   recommended = True
2025-09-16 01:11:47.600398 | orchestrator |   url = None
2025-09-16 01:11:47.600433 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:101 in __init__
2025-09-16 01:11:47.600466 | orchestrator |      98 │   self.required_flavors = definitions["mandatory"]
2025-09-16 01:11:47.600476 | orchestrator |      99 │   self.cloud = cloud
2025-09-16 01:11:47.600487 | orchestrator |     100 │   if recommended:
2025-09-16 01:11:47.600498 | orchestrator |   ❱ 101 │   │   recommended_flavors = definitions["recommended"]
2025-09-16 01:11:47.600509 | orchestrator |     102 │   │   # Filter recommended flavors based on memory limit
2025-09-16 01:11:47.600520 | orchestrator |     103 │   │   limit_memory_mb = limit_memory * 1024
2025-09-16 01:11:47.600530 | orchestrator |     104 │   │   filtered_recommended = [
2025-09-16 01:11:47.600557 | orchestrator |   locals:
2025-09-16 01:11:47.600574 | orchestrator |   cloud =
2025-09-16 01:11:47.600596 | orchestrator |   definitions = {
2025-09-16 01:11:47.600607 | orchestrator |     'reference': [
2025-09-16 01:11:47.600618 | orchestrator |       {'field': 'name', 'mandatory_prefix': 'SCS-'},
2025-09-16 01:11:47.600629 | orchestrator |       {'field': 'cpus'},
2025-09-16 01:11:47.600640 | orchestrator |       {'field': 'ram'},
2025-09-16 01:11:47.600651 | orchestrator |       {'field': 'disk'},
2025-09-16 01:11:47.600662 | orchestrator |       {'field': 'public', 'default': True},
2025-09-16 01:11:47.600673 | orchestrator |       {'field': 'disabled', 'default': False}
2025-09-16 01:11:47.600683 | orchestrator |     ],
2025-09-16 01:11:47.600694 | orchestrator |     'mandatory': [   (every entry carries 'scs:disk0-type': 'network' and 'hw_rng:allowed': 'true')
2025-09-16 01:11:47.626131 | orchestrator |       SCS-1L-1      cpus=1  ram=1024  disk=0   scs:cpu-type=crowded-core  scs:name-v1=SCS-1L:1     scs:name-v2=SCS-1L-1
2025-09-16 01:11:47.626291 | orchestrator |       SCS-1L-1-5    cpus=1  ram=1024  disk=5   scs:cpu-type=crowded-core  scs:name-v1=SCS-1L:5     scs:name-v2=SCS-1L-5
2025-09-16 01:11:47.626433 | orchestrator |       SCS-1V-2      cpus=1  ram=2048  disk=0   scs:cpu-type=shared-core   scs:name-v1=SCS-1V:2     scs:name-v2=SCS-1V-2
2025-09-16 01:11:47.626550 | orchestrator |       SCS-1V-2-5    cpus=1  ram=2048  disk=5   scs:cpu-type=shared-core   scs:name-v1=SCS-1V:2:5   scs:name-v2=SCS-1V-2-5
2025-09-16 01:11:47.626689 | orchestrator |       SCS-1V-4      cpus=1  ram=4096  disk=0   scs:cpu-type=shared-core   scs:name-v1=SCS-1V:4     scs:name-v2=SCS-1V-4
2025-09-16 01:11:47.626815 | orchestrator |       SCS-1V-4-10   cpus=1  ram=4096  disk=10  scs:cpu-type=shared-core   scs:name-v1=SCS-1V:4:10  scs:name-v2=SCS-1V-4-10
2025-09-16 01:11:47.626933 | orchestrator |       SCS-1V-8      cpus=1  ram=8192  disk=0   scs:cpu-type=shared-core   scs:name-v1=SCS-1V:8     scs:name-v2=SCS-1V-8
2025-09-16 01:11:47.627052 | orchestrator |       SCS-1V-8-20   cpus=1  ram=8192  disk=20  scs:cpu-type=shared-core   scs:name-v1=SCS-1V:8:20  scs:name-v2=SCS-1V-8-20
2025-09-16 01:11:47.660871 | orchestrator |       SCS-2V-4      cpus=2  ram=4096  disk=0   scs:cpu-type=shared-core   scs:name-v1=SCS-2V:4     scs:name-v2=SCS-2V-4
2025-09-16 01:11:47.661003 | orchestrator |       SCS-2V-4-10   cpus=2  ram=4096  disk=10  scs:cpu-type=shared-core   scs:name-v1=SCS-2V:4:10  scs:name-v2=SCS-2V-4-10
2025-09-16 01:11:47.661110 | orchestrator |       ... +19
2025-09-16 01:11:47.661121 | orchestrator |     ]
2025-09-16 01:11:47.661131 | orchestrator |   }
2025-09-16 01:11:47.661142 | orchestrator |   limit_memory = 32
2025-09-16 01:11:47.661153 | orchestrator |   recommended = True
2025-09-16 01:11:47.661163 | orchestrator |   self =
2025-09-16 01:11:47.661241 | orchestrator | KeyError: 'recommended'
2025-09-16 01:11:48.074124 | orchestrator | ERROR
2025-09-16 01:11:48.074534 | orchestrator | {
2025-09-16 01:11:48.074641 | orchestrator |     "delta": "0:00:08.596818",
2025-09-16 01:11:48.074715 | orchestrator |     "end": "2025-09-16 01:11:47.937039",
2025-09-16 01:11:48.074778 | orchestrator |     "msg": "non-zero return code",
2025-09-16 01:11:48.074888 | orchestrator |     "rc": 1,
2025-09-16 01:11:48.074954 | orchestrator |     "start": "2025-09-16 01:11:39.340221"
2025-09-16 01:11:48.075008 | orchestrator | } failure
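The traceback above is the root cause of the job failure: the flavor definitions selected here (name = 'local', url = None) contain only 'reference' and 'mandatory' sections, yet `__init__` at main.py:101 indexes `definitions["recommended"]` unconditionally once `recommended = True`. The following is a minimal sketch of a defensive variant of that lookup, assuming the dict layout shown in the locals; the helper name `collect_flavors` is hypothetical and not part of openstack-flavor-manager, and whether the right fix is a fallback like this or shipping a 'recommended' section in the definition document is a decision for that project.

```python
# Minimal sketch, assuming the definitions layout shown in the locals above.
# collect_flavors is a hypothetical helper, not openstack-flavor-manager API.

def collect_flavors(definitions: dict, recommended: bool, limit_memory: int) -> list[dict]:
    flavors = list(definitions["mandatory"])  # present in the dump above
    if recommended:
        # .get() avoids the KeyError seen in the log when the definition
        # document ships no 'recommended' section at all.
        recommended_flavors = definitions.get("recommended", [])
        limit_memory_mb = limit_memory * 1024  # same conversion as main.py:103
        flavors += [f for f in recommended_flavors if f.get("ram", 0) <= limit_memory_mb]
    return flavors
```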
2025-09-16 01:11:48.098295 |
2025-09-16 01:11:48.098432 | PLAY RECAP
2025-09-16 01:11:48.098516 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2025-09-16 01:11:48.098558 |
2025-09-16 01:11:48.335233 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-09-16 01:11:48.336453 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-16 01:11:49.073431 |
2025-09-16 01:11:49.073601 | PLAY [Post output play]
2025-09-16 01:11:49.089931 |
2025-09-16 01:11:49.090114 | LOOP [stage-output : Register sources]
2025-09-16 01:11:49.142606 |
2025-09-16 01:11:49.142854 | TASK [stage-output : Check sudo]
2025-09-16 01:11:49.969363 | orchestrator | sudo: a password is required
2025-09-16 01:11:50.179256 | orchestrator | ok: Runtime: 0:00:00.013878
2025-09-16 01:11:50.191464 |
2025-09-16 01:11:50.191635 | LOOP [stage-output : Set source and destination for files and folders]
2025-09-16 01:11:50.235905 |
2025-09-16 01:11:50.236215 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-09-16 01:11:50.304733 | orchestrator | ok
2025-09-16 01:11:50.313289 |
2025-09-16 01:11:50.313418 | LOOP [stage-output : Ensure target folders exist]
2025-09-16 01:11:50.762361 | orchestrator | ok: "docs"
2025-09-16 01:11:50.762674 |
2025-09-16 01:11:50.991250 | orchestrator | ok: "artifacts"
2025-09-16 01:11:51.203729 | orchestrator | ok: "logs"
2025-09-16 01:11:51.227106 |
2025-09-16 01:11:51.227276 | LOOP [stage-output : Copy files and folders to staging folder]
2025-09-16 01:11:51.269088 |
2025-09-16 01:11:51.269421 | TASK [stage-output : Make all log files readable]
2025-09-16 01:11:51.524530 | orchestrator | ok
2025-09-16 01:11:51.534146 |
2025-09-16 01:11:51.534292 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-09-16 01:11:51.570515 | orchestrator | skipping: Conditional result was False
2025-09-16 01:11:51.588388 |
2025-09-16 01:11:51.588579 | TASK [stage-output : Discover log files for compression]
2025-09-16 01:11:51.605820 | orchestrator | skipping: Conditional result was False
2025-09-16 01:11:51.620095 |
2025-09-16 01:11:51.620259 | LOOP [stage-output : Archive everything from logs]
2025-09-16 01:11:51.668176 |
2025-09-16 01:11:51.668372 | PLAY [Post cleanup play]
2025-09-16 01:11:51.677204 |
2025-09-16 01:11:51.677315 | TASK [Set cloud fact (Zuul deployment)]
2025-09-16 01:11:51.742217 | orchestrator | ok
2025-09-16 01:11:51.754675 |
2025-09-16 01:11:51.754893 | TASK [Set cloud fact (local deployment)]
2025-09-16 01:11:51.791197 | orchestrator | skipping: Conditional result was False
2025-09-16 01:11:51.803718 |
2025-09-16 01:11:51.803857 | TASK [Clean the cloud environment]
2025-09-16 01:11:52.324102 | orchestrator | 2025-09-16 01:11:52 - clean up servers
2025-09-16 01:11:53.071486 | orchestrator | 2025-09-16 01:11:53 - testbed-manager
2025-09-16 01:11:53.158375 | orchestrator | 2025-09-16 01:11:53 - testbed-node-2
2025-09-16 01:11:53.248460 | orchestrator | 2025-09-16 01:11:53 - testbed-node-0
2025-09-16 01:11:53.335133 | orchestrator | 2025-09-16 01:11:53 - testbed-node-4
2025-09-16 01:11:53.424818 | orchestrator | 2025-09-16 01:11:53 - testbed-node-3
2025-09-16 01:11:53.519816 | orchestrator | 2025-09-16 01:11:53 - testbed-node-5
2025-09-16 01:11:53.628987 | orchestrator | 2025-09-16 01:11:53 - testbed-node-1
2025-09-16 01:11:53.719334 | orchestrator | 2025-09-16 01:11:53 - clean up keypairs
2025-09-16 01:11:53.739404 | orchestrator | 2025-09-16 01:11:53 - testbed
2025-09-16 01:11:53.765576 | orchestrator | 2025-09-16 01:11:53 - wait for servers to be gone
2025-09-16 01:12:02.445071 | orchestrator | 2025-09-16 01:12:02 - clean up ports
2025-09-16 01:12:02.632821 | orchestrator | 2025-09-16 01:12:02 - 2698c67a-16d6-4e35-8895-c331d15f49cf
2025-09-16 01:12:02.881151 | orchestrator | 2025-09-16 01:12:02 - 79f7fda5-63cf-4403-b35e-7257b6930d4f
2025-09-16 01:12:03.205837 | orchestrator | 2025-09-16 01:12:03 - 945bb886-9d0e-48e1-aa0b-b659c9e29d36
2025-09-16 01:12:03.418883 | orchestrator | 2025-09-16 01:12:03 - 9f50aac6-69cc-4e43-83ff-a242b48890f4
2025-09-16 01:12:03.691447 | orchestrator | 2025-09-16 01:12:03 - cf7728e9-045a-4fc3-a883-36ba796149a3
2025-09-16 01:12:03.909779 | orchestrator | 2025-09-16 01:12:03 - cfae3754-b549-4b79-94f9-46e727756f91
2025-09-16 01:12:04.323231 | orchestrator | 2025-09-16 01:12:04 - e1e06eba-8896-4378-8c5a-9e6c356cc9ae
2025-09-16 01:12:04.565691 | orchestrator | 2025-09-16 01:12:04 - clean up volumes
2025-09-16 01:12:04.826234 | orchestrator | 2025-09-16 01:12:04 - testbed-volume-0-node-base
2025-09-16 01:12:04.864198 | orchestrator | 2025-09-16 01:12:04 - testbed-volume-manager-base
2025-09-16 01:12:04.904420 | orchestrator | 2025-09-16 01:12:04 - testbed-volume-5-node-base
2025-09-16 01:12:04.943267 | orchestrator | 2025-09-16 01:12:04 - testbed-volume-4-node-base
2025-09-16 01:12:04.983843 | orchestrator | 2025-09-16 01:12:04 - testbed-volume-2-node-base
2025-09-16 01:12:05.025153 | orchestrator | 2025-09-16 01:12:05 - testbed-volume-1-node-base
2025-09-16 01:12:05.064857 | orchestrator | 2025-09-16 01:12:05 - testbed-volume-3-node-base
2025-09-16 01:12:05.105024 | orchestrator | 2025-09-16 01:12:05 - testbed-volume-5-node-5
2025-09-16 01:12:05.145009 | orchestrator | 2025-09-16 01:12:05 - testbed-volume-0-node-3
2025-09-16 01:12:05.183267 | orchestrator | 2025-09-16 01:12:05 - testbed-volume-1-node-4
2025-09-16 01:12:05.224829 | orchestrator | 2025-09-16 01:12:05 - testbed-volume-2-node-5
2025-09-16 01:12:05.266298 | orchestrator | 2025-09-16 01:12:05 - testbed-volume-8-node-5
2025-09-16 01:12:05.307200 | orchestrator | 2025-09-16 01:12:05 - testbed-volume-6-node-3
2025-09-16 01:12:05.368077 | orchestrator | 2025-09-16 01:12:05 - testbed-volume-4-node-4
2025-09-16 01:12:05.414388 | orchestrator | 2025-09-16 01:12:05 - testbed-volume-7-node-4
2025-09-16 01:12:05.458721 | orchestrator | 2025-09-16 01:12:05 - testbed-volume-3-node-3
2025-09-16 01:12:05.505163 | orchestrator | 2025-09-16 01:12:05 - disconnect routers
2025-09-16 01:12:05.629361 | orchestrator | 2025-09-16 01:12:05 - testbed
2025-09-16 01:12:06.643323 | orchestrator | 2025-09-16 01:12:06 - clean up subnets
2025-09-16 01:12:06.693702 | orchestrator | 2025-09-16 01:12:06 - subnet-testbed-management
2025-09-16 01:12:06.846144 | orchestrator | 2025-09-16 01:12:06 - clean up networks
2025-09-16 01:12:07.016218 | orchestrator | 2025-09-16 01:12:07 - net-testbed-management
2025-09-16 01:12:07.303831 | orchestrator | 2025-09-16 01:12:07 - clean up security groups
2025-09-16 01:12:07.348552 | orchestrator | 2025-09-16 01:12:07 - testbed-node
2025-09-16 01:12:07.483260 | orchestrator | 2025-09-16 01:12:07 - testbed-management
2025-09-16 01:12:07.604329 | orchestrator | 2025-09-16 01:12:07 - clean up floating ips
2025-09-16 01:12:07.646239 | orchestrator | 2025-09-16 01:12:07 - 81.163.193.163
2025-09-16 01:12:08.009856 | orchestrator | 2025-09-16 01:12:08 - clean up routers
2025-09-16 01:12:08.067948 | orchestrator | 2025-09-16 01:12:08 - testbed
2025-09-16 01:12:09.856958 | orchestrator | ok: Runtime: 0:00:17.312707
2025-09-16 01:12:09.861315 |
2025-09-16 01:12:09.861462 | PLAY RECAP
2025-09-16 01:12:09.861580 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-09-16 01:12:09.861631 |
2025-09-16 01:12:09.985751 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-16 01:12:09.988244 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-16 01:12:10.790810 |
2025-09-16 01:12:10.791078 | PLAY [Cleanup play]
2025-09-16 01:12:10.809358 |
2025-09-16 01:12:10.809521 | TASK [Set cloud fact (Zuul deployment)]
2025-09-16 01:12:10.865304 | orchestrator | ok
2025-09-16 01:12:10.874200 |
2025-09-16 01:12:10.874353 | TASK [Set cloud fact (local deployment)]
2025-09-16 01:12:10.900616 | orchestrator | skipping: Conditional result was False
2025-09-16 01:12:10.916548 |
2025-09-16 01:12:10.916681 | TASK [Clean the cloud environment]
2025-09-16 01:12:12.030061 | orchestrator | 2025-09-16 01:12:12 - clean up servers
2025-09-16 01:12:12.503320 | orchestrator | 2025-09-16 01:12:12 - clean up keypairs
2025-09-16 01:12:12.520256 | orchestrator | 2025-09-16 01:12:12 - wait for servers to be gone
2025-09-16 01:12:12.563222 | orchestrator | 2025-09-16 01:12:12 - clean up ports
2025-09-16 01:12:12.633505 | orchestrator | 2025-09-16 01:12:12 - clean up volumes
2025-09-16 01:12:12.691008 | orchestrator | 2025-09-16 01:12:12 - disconnect routers
2025-09-16 01:12:12.710413 | orchestrator | 2025-09-16 01:12:12 - clean up subnets
2025-09-16 01:12:12.732255 | orchestrator | 2025-09-16 01:12:12 - clean up networks
2025-09-16 01:12:12.854683 | orchestrator | 2025-09-16 01:12:12 - clean up security groups
2025-09-16 01:12:12.885141 | orchestrator | 2025-09-16 01:12:12 - clean up floating ips
2025-09-16 01:12:12.910487 | orchestrator | 2025-09-16 01:12:12 - clean up routers
2025-09-16 01:12:13.454724 | orchestrator | ok: Runtime: 0:00:01.260207
2025-09-16 01:12:13.458597 |
2025-09-16 01:12:13.458758 | PLAY RECAP
2025-09-16 01:12:13.458915 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-09-16 01:12:13.458983 |
2025-09-16 01:12:13.577529 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
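Both "Clean the cloud environment" tasks above walk the same teardown order: servers, keypairs, wait for the servers to be gone, ports, volumes, router disconnect, subnets, networks, security groups, floating IPs, and finally the routers; the second pass (from cleanup.yml) finds nothing left and finishes in about a second. Below is a rough openstacksdk sketch of that order; the cloud name, the name-prefix filter, and the device_owner filter are assumptions and this is not the testbed's actual cleanup script.

```python
# Rough sketch of the teardown order from the log, using openstacksdk.
# Cloud name and filters are assumptions; the real cleanup script differs.
import time
import openstack

conn = openstack.connect(cloud="testbed")        # assumed clouds.yaml entry
prefix = "testbed"

for server in conn.compute.servers():
    if server.name.startswith(prefix):
        conn.compute.delete_server(server)       # clean up servers
for keypair in conn.compute.keypairs():
    if keypair.name.startswith(prefix):
        conn.compute.delete_keypair(keypair)     # clean up keypairs
while any(s.name.startswith(prefix) for s in conn.compute.servers()):
    time.sleep(5)                                # wait for servers to be gone

mgmt_net = conn.network.find_network("net-testbed-management")
if mgmt_net:
    for port in conn.network.ports(network_id=mgmt_net.id):
        if port.device_owner and port.device_owner.startswith("compute:"):
            conn.network.delete_port(port)       # clean up (instance) ports
for volume in conn.block_storage.volumes():
    if volume.name.startswith(prefix + "-volume"):
        conn.block_storage.delete_volume(volume) # clean up volumes

router = conn.network.find_router(prefix)        # router is named "testbed"
subnet = conn.network.find_subnet("subnet-testbed-management")
if router and subnet:
    conn.network.remove_interface_from_router(router, subnet_id=subnet.id)  # disconnect routers
if subnet:
    conn.network.delete_subnet(subnet)           # clean up subnets
if mgmt_net:
    conn.network.delete_network(mgmt_net)        # clean up networks
for sg in conn.network.security_groups():
    if sg.name.startswith(prefix):
        conn.network.delete_security_group(sg)   # clean up security groups
for fip in conn.network.ips():
    conn.network.delete_ip(fip)                  # clean up floating ips
if router:
    conn.network.delete_router(router)           # clean up routers
```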
2025-09-16 01:12:13.579534 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-16 01:12:14.349533 |
2025-09-16 01:12:14.349686 | PLAY [Base post-fetch]
2025-09-16 01:12:14.364728 |
2025-09-16 01:12:14.364852 | TASK [fetch-output : Set log path for multiple nodes]
2025-09-16 01:12:14.410461 | orchestrator | skipping: Conditional result was False
2025-09-16 01:12:14.424830 |
2025-09-16 01:12:14.425041 | TASK [fetch-output : Set log path for single node]
2025-09-16 01:12:14.483521 | orchestrator | ok
2025-09-16 01:12:14.493237 |
2025-09-16 01:12:14.493397 | LOOP [fetch-output : Ensure local output dirs]
2025-09-16 01:12:14.956129 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/a8a25034b5de43c9aad8dd2bdd5f1f51/work/logs"
2025-09-16 01:12:15.226792 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a8a25034b5de43c9aad8dd2bdd5f1f51/work/artifacts"
2025-09-16 01:12:15.498740 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a8a25034b5de43c9aad8dd2bdd5f1f51/work/docs"
2025-09-16 01:12:15.527193 |
2025-09-16 01:12:15.527369 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-09-16 01:12:16.453844 | orchestrator | changed: .d..t...... ./
2025-09-16 01:12:16.454261 | orchestrator | changed: All items complete
2025-09-16 01:12:16.454327 |
2025-09-16 01:12:17.171186 | orchestrator | changed: .d..t...... ./
2025-09-16 01:12:17.887076 | orchestrator | changed: .d..t...... ./
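The `changed: .d..t...... ./` lines above are rsync itemized-change strings printed while the fetch-output role copies logs, artifacts and docs from the node back into the executor's work directory; `.d..t......` marks an existing directory whose modification time was updated rather than new content. A small sketch of producing the same kind of output follows; the node-side source path is an assumption, while the destination is the executor work directory shown earlier in the log.

```python
# Small sketch: rsync --itemize-changes yields strings such as ".d..t......",
# matching the "changed:" lines above. The source path is an assumption.
import subprocess

src = "/home/zuul/zuul-output/logs/"   # assumed node-side output layout
dest = "/var/lib/zuul/builds/a8a25034b5de43c9aad8dd2bdd5f1f51/work/logs/"

subprocess.run(["rsync", "--archive", "--itemize-changes", src, dest], check=True)
```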
2025-09-16 01:12:17.913914 |
2025-09-16 01:12:17.914096 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-09-16 01:12:17.951385 | orchestrator | skipping: Conditional result was False
2025-09-16 01:12:17.954068 | orchestrator | skipping: Conditional result was False
2025-09-16 01:12:17.977419 |
2025-09-16 01:12:17.977532 | PLAY RECAP
2025-09-16 01:12:17.977612 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-09-16 01:12:17.977658 |
2025-09-16 01:12:18.098458 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-16 01:12:18.100946 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-16 01:12:18.837298 |
2025-09-16 01:12:18.837449 | PLAY [Base post]
2025-09-16 01:12:18.852152 |
2025-09-16 01:12:18.852284 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-09-16 01:12:19.785648 | orchestrator | changed
2025-09-16 01:12:19.796292 |
2025-09-16 01:12:19.796410 | PLAY RECAP
2025-09-16 01:12:19.796488 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-09-16 01:12:19.796566 |
2025-09-16 01:12:19.908200 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-16 01:12:19.909193 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-09-16 01:12:20.675894 |
2025-09-16 01:12:20.676077 | PLAY [Base post-logs]
2025-09-16 01:12:20.686614 |
2025-09-16 01:12:20.686742 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-09-16 01:12:21.141160 | localhost | changed
2025-09-16 01:12:21.157569 |
2025-09-16 01:12:21.157747 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-09-16 01:12:21.194338 | localhost | ok
2025-09-16 01:12:21.197996 |
2025-09-16 01:12:21.198130 | TASK [Set zuul-log-path fact]
2025-09-16 01:12:21.213836 | localhost | ok
2025-09-16 01:12:21.223235 |
2025-09-16 01:12:21.223359 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-16 01:12:21.258764 | localhost | ok
2025-09-16 01:12:21.262309 |
2025-09-16 01:12:21.262422 | TASK [upload-logs : Create log directories]
2025-09-16 01:12:21.739875 | localhost | changed
2025-09-16 01:12:21.745311 |
2025-09-16 01:12:21.745464 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-09-16 01:12:22.267372 | localhost -> localhost | ok: Runtime: 0:00:00.006748
2025-09-16 01:12:22.274071 |
2025-09-16 01:12:22.274218 | TASK [upload-logs : Upload logs to log server]
2025-09-16 01:12:22.821073 | localhost | Output suppressed because no_log was given
2025-09-16 01:12:22.823905 |
2025-09-16 01:12:22.824100 | LOOP [upload-logs : Compress console log and json output]
2025-09-16 01:12:22.885557 | localhost | skipping: Conditional result was False
2025-09-16 01:12:22.889469 | localhost | skipping: Conditional result was False
2025-09-16 01:12:22.896260 |
2025-09-16 01:12:22.896465 | LOOP [upload-logs : Upload compressed console log and json output]
2025-09-16 01:12:22.951914 | localhost | skipping: Conditional result was False
2025-09-16 01:12:22.952554 |
2025-09-16 01:12:22.955819 | localhost | skipping: Conditional result was False
2025-09-16 01:12:22.969328 |
2025-09-16 01:12:22.969549 | LOOP [upload-logs : Upload console log and json output]